The Apple Watch Series 9 and the Apple Watch Ultra 2 hit shelves a couple of weeks ago, bringing all sorts of exciting features. And the new upgrades seem to have a theme this year.
While I was watching Apple’s 2023 Wonderlust event online, I noticed a pattern in the software and hardware updates the company brought to the Apple Watch. Apple seems to be championing connectivity, with features designed to let you better connect with your health, with your other Apple devices and even with the folks around you. NameDrop is designed for the latter, and it lets you share contact information almost instantly.
Here’s what we know about NameDrop for the Apple Watch. For more, here’s how to upgrade to iOS 17 and how to download iOS 17.
What is NameDrop?
NameDrop is a new way for Apple Watch users to share contact information, with a simple and subtle movement of the wrist. NameDrop works much like AirDrop does to quickly share a file or send a photo. Gone are the days of typing phone numbers or handing over your phone so someone can put their number into your contacts.
During its event, Apple showed two people placing their Apple Watches near each other to swap contact info. Apple says this is enabled by the brand-new S9 chip.
How can I get NameDrop?
Though the feature was announced as part of watchOS 10, the fine print of Apple’s breakdown of both watchOS 10 and iOS 17 reveals that NameDrop will officially be coming to Apple Watches “later this year.”
On Wednesday, Sept. 27, Apple released the watchOS 10.1 developer beta, which gave those who downloaded the update a chance to finally use NameDrop on the Apple Watch. If you want to use NameDrop on your watch but don’t want to run beta software, you’ll need to wait for watchOS 10.1 to be widely available.
Can I use NameDrop if I don’t have an Apple Watch?
If you’re an iPhone user and you want in on all the NameDrop action, you’re in luck. Apple’s iOS 17, which has finally arrived, comes with NameDrop capabilities. So if you’re an iPhone user running iOS 17, you’ll be able to use NameDrop to instantly swap contacts with another iPhone user on iOS 17, as well as with Apple Watch users running watchOS 10.1.
How does NameDrop work?
Apple calls NameDrop a “new AirDrop experience” where “a user can hold their iPhone near another to share their contact information with only their intended recipient.” Apple also said that users will be able to pick and choose what information gets shared over NameDrop.
According to MacRumors, all you need to do is navigate to the Contacts app, select your contact and then Share. After that, your watch should direct you to place your watch near another Apple Watch or iPhone to share your contact information.
Apple has also said that watch users will be able to use NameDrop by tapping the My Card watch face complication and then bringing their Apple Watch face to face with someone else’s Apple Watch or iPhone. It’s important to note that NameDrop works only between two devices that are each running watchOS 10.1 or iOS 17.
For more, here’s everything on the Apple Watch Series 9 and Apple Watch Ultra 2. And here’s how to turn off NameDrop on your iPhone.
The Google Pixel 8 is unlike any other phone you can buy today. After testing it for nearly a week, I realized that the Pixel 8 and its AI features offer an early glimpse at how photography, calling and even our phone’s wallpaper could evolve. I wouldn’t run out and buy a Pixel 8 today solely because of these features, but they certainly show how AI will continue to be integrated into our daily lives.
For example, there’s a feature called Best Take (which I keep calling Face Swap) that works when you take a few consecutive photos of a person or group of people. After taking the photos, I can pull up a face and head editor (my term, not Google’s) in Google Photos and swap out a person’s head with a different version from another image in the series to get their best facial expression. The result is a photo where everyone’s eyes are open and everyone is smiling, if that’s what you want.
Philosophically, this photo doesn’t actually exist because the moment never happened. And yet, here it is.
At first, Best Take creeped me out. But after applying it to a variety of photos of friends and coworkers, I think it’s incredible and works remarkably well. I can’t see any lines where the heads were swapped. (Yep, I actually wrote that.) And I still can’t fully get my head around the possibilities that Best Take opens up. It’s the start of a path where our photography can be even more curated and polished, even if the photos we take don’t start out that way.
Will Best Take make us more vain by giving us another tool to present a seemingly ideal version of ourselves online? Or should I just enjoy that I can have a nice photo where my friends look their best in that moment? I’m still conflicted.
Starting at $699 (£699, AU$1,199), the Google Pixel 8 is an ideal phone for most people. And even if you aren’t interested in the AI features (and there are a lot of them), the Pixel 8’s updated design, display and cameras make it one of the best non-Pro Google phones. The higher price seems worth it and makes you realize just how incredibly affordable the Pixel 6 and 7 were.
The Pixel 8 is smaller, lighter and brighter
The Pixel 8 is like a svelte version of the Pixel 7. It’s shorter and narrower, although it’s actually 0.2 millimeters thicker. It weighs 10 grams less than the Pixel 7. Combine all that with the fact that nearly every edge is rounded over, and you’ll find, like me, that the Pixel 8 is incredibly comfortable to hold, with or without a case.
My review unit is the rose color, which in most lighting looks peach.
The screen is smaller, but so are the bezels around it. The 6.2-inch display now has a 60 to 120Hz variable refresh rate for smoother scrolling and a higher max brightness. It’s easy to see under bright sunlight, especially compared with the Pixel 7 and 7A, which look dim by comparison. Watching films, playing games or just admiring mundane Android 14 animations were all enjoyable on the Pixel 8’s screen.
The back is still defined by that wide camera bar, which I like. Overall, the Pixel 8 looks dapper and almost chic.
The Pixel 8 has a new main camera sensor
The Camera app has a slight redesign. There are two icons at the bottom: a photo camera and a video camera. When I tap on the photo camera, all the modes under the viewfinder change to photo-centric ones like portrait mode and long exposure mode. And when I tap on the video camera symbol, the modes become specific for video like slow motion. It took about a day to get used to this change, but I’m a fan of the new layout.
Also, the Pixel 8 has a new macro focus feature that kicks in when the phone’s main camera is within centimeters of a subject. I like this addition and found it useful for food and coffee snaps, where I want to get close enough for the cup or plate to fill the frame while remaining in focus.
The 12-megapixel ultrawide camera is the same as the one on previous Pixel models. The 50-megapixel main camera has a new sensor that Google claims can collect 21% more light. The front-facing camera is also new, but still only has fixed focus on the Pixel 8 – compared with the Pixel 8 Pro’s front-facing autofocus camera.
Take a look below at some of my favorite photos that I took with the Pixel 8.
Does the Pixel 8 take better photos than the Pixel 7? Yes. But they’re not dramatically different. Check out the photos below of the Manhattan Bridge. Both look great.
But if I punch in on where the main upright section intersects with the deck, you can see that the Pixel 8’s image below has more detail and sharpness. Notice the individual rivets on the metal uprights compared to the Pixel 7’s photo, where most of them don’t show up.
The Pixel 8 is defined by its camera AI tools
The Pixel 8’s AI features like Best Take steal the show when it comes to the camera. By the way, Best Take works not only on photos taken with the phone, but on any photos in your Google Photos library on your Pixel 8. I used Best Take to swap out faces in photos I took on the iPhone 15 Pro and Pro Max as part of my review last month.
Best Take works only on a series of photos taken in a sequence. So you can’t take a photo of yourself at 40 and replace your head with one from a photo of when you were in your 20s. Also, the feature works only on people. But I can almost hear Google’s SVP for devices and services, Rick Osterloh, saying at the Pixel 9 or Pixel 10 launch, “We heard you, and we now offer Best Take for pets.” Applause and cheers!
Remember Magic Eraser, which debuted on the Pixel 6 and lets you remove a distraction, like someone in the background, from your photo? Now there’s Magic Editor on the Pixel 8 and 8 Pro, which lets you do even more. It can remove someone from the background, or even move your photo’s subject and resize it. I replaced the sky, the ground and even the entire background in some photos. Basically, if you see someone jumping abnormally high in a photo taken on a Pixel 8, beware: That person may have used Magic Editor to exaggerate things.
For example, below is a photo I took of CNET’s Tara Brown and Theo Liggians jumping off a rock. I used the Magic Eraser, which has been around for a couple of years, to remove the rock. It’s obvious something was altered.
Below is the same photo after I used the new Magic Editor. It’s not perfect, but it’s an improvement over the Magic Eraser photo.
But why stop there? I tried the Magic Editor a second time and replaced the actual zigzag tiled ground and artificial turf with a surface made entirely of bricks. This obscured the shadow left by the rock and made everything look more uniform. I have no idea where the Pixel 8 came up with the replacement ground, since there weren’t any bricks in the actual surface, and that’s one of the downsides to using AI this way.
A majority of the time, Magic Editor generations took a matter of seconds; it didn’t feel like I waited long. But there were a couple of times when it took 10 seconds, and one other time I had to close the Google Photos app because it seemed stuck. Best Take feels like a more mature feature than Magic Editor, which still has an experimental vibe to it.
The Magic Editor is a lot of fun, and I truly believe it adds another level of creativity to phone photography. But it does so while raising ethical questions around image manipulation. Fortunately, Magic Editor-generated photos have flaws, or at least the majority of the photos I used it on did. I can usually spot the differences between photos I’ve applied it to and unedited images. The way the AI tries to fill in the background usually results in something looking off. But not everyone may be as photo-savvy as I am, and I imagine Google will keep refining the AI behind it to the point where it’s hard to tell the difference between an altered photo and an unedited one.
There’s one more AI-powered camera tool called Audio Magic Eraser, which can clean up audio in recorded videos for better clarity. It removes distracting background noise or music that might interfere with your video’s audio. Watch the review video that accompanies this article to see examples of it before and after, with and without the Audio Magic Eraser applied.
Using the tool was easy enough. However, the live preview didn’t reflect the changes to the audio that I made, which seems like a bug. I had to save a copy of the edited video to hear the difference Audio Magic Eraser made – which was impressive.
Taken in total, the Pixel 8 has an outstanding camera system with a lot of features you just don’t find on the more expensive iPhone 15 and Galaxy S23. It’s fascinating to see Google, which has an excellent reputation for getting the best images out of a phone camera, still manage to match the likes of Apple and Samsung, all while leaning heavily into these hit-or-miss AI camera features.
My CNET colleague Andrew Lanxon, who is reviewing the Pixel 8 Pro, wasn’t as enamored with Best Take as I was.
Pixel 8 performance and battery
Google’s Tensor chip has never been about pure horsepower. Instead, it’s aimed at optimizing specific tasks and powering all those AI capabilities mentioned earlier. And that’s the case for the Pixel 8’s Tensor G3 chip. During my six days with the phone, I never ran into any performance issues. While I was downloading Genshin Impact and PUBG Mobile and setting them up, the Pixel 8 did get very warm. But that was the only time I noticed this happen.
The G3 chip, along with Android 14, makes the Pixel 8 a delight to use. There’s Face Unlock, which is once again secure enough to use for contactless payments, even without the Pixel 4’s fancy radar sensor. There are strange non-camera AI features like AI Wallpaper, which creates an original wallpaper for your home screen based on Mad Libs-style prompts. You can’t enter your own words and are limited to a list of words for each entry. I can also use Google Assistant to read articles aloud, and there’s a summarize feature that I never got to work. Anytime I asked the Assistant to summarize a story, I was met with the message, “Sorry, I can’t summarize on this device yet.”
Call Screen debuted on the 2018 Pixel 3 and lets the Google Assistant answer a call while you listen in. On the Pixel 8, the Call Screen’s voice sounds like an actual human. I can’t tell if it’s fake or a recording of an actual human voice, which shows you how far Google has pushed this technology over the past five years.
The Pixel 8 has a slightly bigger battery than last year’s Pixel 7. Over the six days, the Pixel 8 had no problem making it through a full day on a single charge. I’m still working on running CNET’s arsenal of battery life tests and performance benchmarks, so check back soon for the results.
Final word on the Pixel 8
The two biggest changes to the Pixel 8 don’t have anything to do with the physical phone. The first is the price. The Pixel 8 costs $699, which is $100 more than the Pixel 6 and 7’s launch prices. But the updates you get, like the refined design, new display and main camera improvements, are worth it. And the Pixel 8’s higher price is more reflective of just how ridiculously affordable Google priced the Pixel 6 and 7 compared with other major smartphone makers at the time. Outside the US, the Pixel 8’s price increase might be steeper.
The other big feature is software support. The Pixel 8 will receive seven years of OS support, which is longer than most Android phones. But the Fairphone 5 takes the crown with eight years of support. Will the Pixel 8 survive until 2030? Maybe? I don’t know.
But in that spirit, I casually polled some of my CNET colleagues over Slack to ask the longest they’d ever owned a phone. Most kept their phones for less than five years, but a few of my coworkers had nearly six-year-old phones like the iPhone 8 and Galaxy Note 8. So perhaps there is an audience for this benefit.
I recommend the Pixel 8 to anyone coming from a Pixel 6A or older, or any phone that’s at least three years old. If you’re trying to decide between the Pixel 7A and Pixel 8, know that the Pixel 8 is better in nearly every way but does cost $200 more. And in terms of the Pixel 8 and 8 Pro, you get 85% of the Pixel 8 Pro experience on the regular Pixel 8. The Pixel 8 Pro has a nicer screen, a new higher-resolution ultrawide camera, more RAM and a dedicated 5x telephoto camera, all of which the Pixel 8 lacks. The Pixel 8 Pro also has more camera tools, like Video Boost, compared with the Pixel 8.
With the Pixel 8’s launch, Google’s current lineup has a phone for every budget: $499 for the Pixel 7A, $699 for the Pixel 8, $999 for the Pixel 8 Pro and $1,799 for the Pixel Fold. The Pixel 8 isn’t Google’s most affordable device, but it’s a phone most people should consider. Its AI features, reasonable price and seven years of software support help it stand out among its Pixel siblings.
Google Pixel 8 specs vs. Pixel 8 Pro, Pixel 7A, Pixel 7
5G (Sub 6, mmWave); VPN by Google One; 7 years of OS, security and Feature Drop updates; front-facing camera has autofocus; 13W Qi wireless charging; 30W wired charging; USB-3.2 speeds via USB-C; IP68 dust and water resistance; Gorilla Glass Victus 2 on front and back
5G (Sub 6 and mmWave); VPN by Google One; 7 years of OS, security and Feature Drop updates; front-facing camera has autofocus; 13W Qi wireless charging; 30W wired charging; USB-3.2 speeds via USB-C; IP68 dust and water resistance; Gorilla Glass Victus 2 on front and back
5G; Magic Eraser, Photo Unblur, Real Tone, Face Unblur, Long Exposure Mode, Action Pan; Hold For Me, Wait Times, Direct My Call, Live Translate
US price off-contract: $699 (Pixel 8), $999 (Pixel 8 Pro), $499 (Pixel 7A), $600 (Pixel 7); all 128GB
UK price: £699 (Pixel 8), £999 (Pixel 8 Pro), £449 (Pixel 7A), £599 (Pixel 7); all 128GB
Australia price: AU$1,199 (Pixel 8), AU$1,699 (Pixel 8 Pro), AU$749 (Pixel 7A), AU$999 (Pixel 7); all 128GB
How we test phones
Every phone tested by CNET’s reviews team is actually used in the real world. We test a phone’s features, play games and take photos. We examine the display to see if it’s bright, sharp and vibrant. We analyze the design and build to see how it feels to hold and whether it has an IP rating for water resistance. We push the processor’s performance to the extremes using standardized benchmark tools like Geekbench and 3DMark, along with our own anecdotal observations while navigating the interface, recording high-resolution videos and playing graphically intense games at high refresh rates.
All the cameras are tested in a variety of conditions, from bright sunlight to dark indoor scenes. We try out special features like night mode and portrait mode and compare our findings against similarly priced competing phones. We also check battery life by using the phone daily, as well as by running a series of battery drain tests.
We take into account additional features that can be useful, like support for 5G, satellite connectivity, fingerprint and face sensors, stylus support, fast-charging speeds and foldable displays, among others. And of course we balance all of this against the price to give you a verdict on whether the phone, whatever it costs, actually represents good value. While these tests may not always be reflected in CNET’s initial review, we conduct follow-up and long-term testing in most circumstances.
From the latest Harry Potter game Hogwarts Legacy to The Legend of Zelda: Tears of the Kingdom, now is a good time to get that video game you’ve been eyeing.
Day 2 of Amazon’s October Prime Day event is winding down, so if you want to grab a bargain, hit up Amazon or Walmart’s anti-Prime Day sale fast. Here are the best game deals we’ve seen across the Xbox, PlayStation 5 and Nintendo Switch gaming consoles.
Amazon’s October Prime Big Deal Days sale is heading into its final hours, but there’s still time to score incredible savings before they’re gone. And the best part is you don’t have to drop a ton of cash to get the most out of the sale. We’ve found plenty of deals still available that cost just $10 or less. For those of us on a tight budget, these are some of the best Prime Day bargains around.
Keep reading for our favorite deals for $10 or less across various product categories. You’ll find savings on tech and accessories, fitness gear, home and garden essentials and more. Keep in mind that most of these Prime Day deals end tonight, so be sure to grab them before they expire.
Nike has everything you need to step out in style year-round, and it’s currently running a big fall sale offering up to 60% off, plus an additional 20% off select styles with code ULTIMATE. The popular sports brand carries a huge selection of hoodies and tracksuits, which are perfect for fall and winter as things start getting colder. Nike also has sneakers and all your gym and summer gear, in case you’re looking to stock up early for next year. It really doesn’t matter what your style is; you’re sure to find something that suits you.
Looking for more discounts? CNET has the best deals from Nike and many others, along with promo code offers — all updated and verified daily.
I’m always on the lookout for travel accessories, because, well, I travel quite a bit. I work from home, so I take my work with me every now and then, carrying my phones, laptop, tablet and headphones. And because of all my gadgets, I also need accessories to make my journey easier. I’ve got a water-resistant backpack with several pockets to fit all my tech, a few power banks to keep everything charged and, of course, cables, cables and more cables.
My most recent addition though, which is currently on sale for Amazon’s October Prime Day event, has been a game-changer.
The ESR magnetic wallet is a slim MagSafe-compatible wallet that you can attach to the back of your iPhone, allowing you to carry up to three individual cards, like your driver’s license, ID and debit or credit card. I don’t always like to have my full-sized wallet when I travel — it’s got a lot going on, and I don’t always need access to all my cards, receipts, coupons and spare change.
Instead, I pull out a couple of my more important cards from my full-sized wallet, place them into my magnetic wallet and then attach it to the back of my iPhone 15 Pro Max (it works with the iPhone 12 and later).
It’s not just a wallet, though. The ESR magnetic wallet also works as a stand for your iPhone. If you unlatch the top half of the wallet, you can create a kickstand to prop up your phone and read the news, watch videos and scroll through social media. You can rotate the magnetic wallet to place your phone horizontally, which is great if you want to watch a movie on a flight or hop on a Zoom call in a coffee shop.
The ESR magnetic wallet also comes in a $46 version that features Find My functionality, so you can see where it is in real time in case you lose your wallet, phone or both.
Apple released the third iOS 17.1 public beta for the iPhone on Wednesday, one day after the company made the beta available to developers. The third beta arrived about three weeks after the release of iOS 17. This beta brings a few new features and bug fixes to the iPhones of developers and other beta testers who want to see what’s coming down the pike from Cupertino.
We recommend downloading a beta only on a device other than your primary one. Since this is a beta version of iOS 17.1, these features might be buggy and battery life may be short, so it’s best to keep the software on a secondary device.
If you’re a developer or public beta tester, here are some of the new features you can find in iOS 17.1 beta 3. Note that the beta is still ongoing, so these might not be the only new features to land on your iPhone when iOS 17.1 is released. There’s no word on a public release date for iOS 17.1 just yet.
iPhone 12 radio frequency concerns addressed
The National Frequency Agency in France said on Sept. 12 that the iPhone 12 exceeds European specific absorption rate limits, and it appears Apple will address those concerns with iOS 17.1.
“iOS 17.1 includes an update for iPhone 12 for users in France to accommodate this specific test protocol that requires reduced power when off-body on a static surface,” Apple posted Tuesday. “iPhone 12 will no longer increase the allowed power when the off-body state is detected, such as while it is sitting on a table.”
New StandBy mode settings
StandBy mode is one of my favorite new iOS features, and in iOS 17.1 beta 2, Apple gives StandBy mode more settings options. With iOS 17.1 beta 2, you can set the StandBy display to turn off after 20 seconds, never or automatically. Apple says that if you choose Automatically, the display will turn off when your iPhone isn’t in use and the room is dark, like when you’re sleeping at night.
However, I checked these settings on my iPhone 14 Pro and iPhone XR and found the options only on the 14 Pro. This makes me think the new settings will be available only on iPhones with an always-on display.
Apple Music upgrades
In iOS 17.1 beta 1, Apple added a new button in Apple Music that allows you to quickly Favorite songs. When a song is playing and you’re looking at its card on your iPhone, there’s a star outline near the song’s title. You can tap this star to add the song to your Favorites.
There’s also a new way to find all your Favorited playlists, albums and songs. To find them, go into the corresponding category in Apple Music, tap the button in the top-right corner of your screen, and tap Favorited.
Apple Music also shows you song suggestions in iOS 17.1 beta 1. To see them, go into any of your playlists and scroll to the bottom of the playlist to see a section called Song Suggestions. These are songs that the app thinks you might like based on your musical tastes.
AirDrop using cellular data
With iOS 17, Apple upgraded AirDrop with NameDrop, which allows two devices to tap each other and exchange contact information — kind of like exchanging digital business cards. And in the first iOS 17.1 beta, Apple now lets you use cellular data to send and receive information over AirDrop when two iPhones are out of range of each other.
Flashlight symbol in Live Activities
Have you ever accidentally switched on your iPhone’s flashlight and had someone point it out to you later? Some iPhone users won’t have to worry about that anymore. In iOS 17.1 beta 1, when I turn on the flashlight, a little flashlight symbol appears in the Live Activities area at the top of my iPhone 14 Pro’s screen. However, I couldn’t replicate this symbol on my iPhone XR, so this feature likely works only on Live Activity-enabled iPhones, like the iPhone 14 Pro and the iPhone 15 lineup.
New ringtones are back
When Apple released iOS 17, it included all-new ringtones and text tones. Apple then removed those sounds with iOS 17.1 beta 1, but the ringtones and text tones appear to be back with iOS 17.1 beta 2. You can still find all the older sounds under Classic on the Ringtone and Text Tone setting pages.
Those are some of the major new features developers and beta testers will see in the third iOS 17.1 beta. That doesn’t mean these are the only features coming to the next iOS update, or that these changes will stick when iOS 17.1 is released to the public.
For more, check out my review of iOS 17 and CNET’s iOS 17 cheat sheet.
With Amazon Prime Day underway, there are plenty of Nintendo Switch and other video game deals flooding the internet. And as awesome as the Nintendo Switch is for playing games like The Legend of Zelda: Tears of the Kingdom or the upcoming Super Mario Wonder, the system is now showing its age. Compared with the PS5 and Xbox Series X, the Nintendo Switch typically runs games at 1080p or below. That leads to a less sharp image on large 4K displays, with lots of jagged edges, an artifact known as aliasing.
Luckily, there are some aftermarket products that can help mitigate this. The mClassic by Marseille is a small HDMI upscaler that bumps up the resolution of all your older systems. This means that a 720p image gets bumped up to 1080p, along with some anti-aliasing tech to smooth over edges.
Right now, the Marseille mClassic is on sale for $80. That’s $20 off the regular retail price. It’s a lightning deal, meaning that once it’s sold out, it’s gone.
I’ve been using the mClassic for years and have written about it in the past. While it won’t completely transform your Switch to produce PS5-level image quality, it does give it a modest visual bump. For some, the difference might be negligible. For others, especially pixel-peepers like myself, it cleans up the round faces in Animal Crossing: New Horizons and sharpens the glow of Samus’ suit in Metroid Dread enough for me to notice.
It should be noted that some TVs have internal upscalers as well. For some, the mClassic might fail to make a significant difference if their TV’s upscaler is already doing the heavy lifting.
If you’re hoping artificial intelligence will get your creative juices flowing faster, Adobe on Tuesday revealed three big changes to its Firefly family of generative AI tools. The engine that powers Photoshop’s image generation is getting a big upgrade, and generative AI is coming to Adobe Illustrator designs and Adobe Express layouts.
Firefly already let you turn text prompts into pictures on the Firefly website and in Adobe’s Photoshop image-editing software, but a second-generation AI model offers more detail and better image quality, said Alexandru Costin, Adobe’s generative AI leader. The model isn’t yet available in Photoshop, but you can expect it to arrive after some testing online.
In my testing, I did indeed find the results and the user interface much better. Adobe trained the new AI model on twice as many images, and it offers higher resolution, better detail like skin pores, and the ability to steer generation with photography parameters like lens focal length and depth of field.
Generative AI starts with the same basic AI methods that have been used for years: train a system to recognize patterns in real-world data. But generative AI goes a step further with the ability to create new material like text, images, speech or video based on its own understanding of those patterns. That’s revolutionized computers, lifting them out of their plodding, literal ways and giving us a taste of what a truly smart machine might be like.
Trying Adobe’s upgraded Firefly generative AI
Although Firefly results sometimes are unconvincing, it’s just plain fun to create fanciful and entertaining images, especially when using more forgiving art styles like paintings, cartoons and watercolors. I enjoyed typing in prompts, seeing what the AI would produce, then tweaking the prompts for more useful or outrageous results.
The interface is improved, too. A high-level option to choose between photorealism and a more artistic illustration style is helpful, as are options for square, landscape and portrait aspect ratios. Sliders for visual intensity and style strength give you a choice of images that are dramatic, understated or something in between. You can upload a reference photo to steer the style of the output.
One of my standard prompts to test AI, a parachuting elephant, produced better results than the first-gen Firefly, though the technology still struggles with the parachute cords. In one image, it managed to construct a wooden frame to hold the elephant, which surprised me.
Creating “a kindly doctor in a hospital room for a pharmaceutical ad” produced a variety of generally acceptable portraits. Firefly produced a variety of races and genders, but every one of them had the obligatory scrubs and stethoscope.
The images Firefly created of a spiky electric guitar were satisfyingly dangerous-looking, if not always endowed with the correct number of frets, pickups and strings.
My testing also showed there’s still a lot of work to be done. In images of the stereotypical hacker hunched over a keyboard, they sometimes wore their hoodies backwards. My quest for a photorealistic red crab waving its claws in the air went unfulfilled, with distorted claws, extra eyes and other problems. Firefly did a passable job generating a groundhog, but Adobe’s training data evidently doesn’t include enough pikas, a more unusual animal found in the high mountains.
In another test I like to run, an image of an angry crocodile leaping out of a stormy ocean with lightning striking all around, Firefly still struggled to produce a plausible arrangement of teeth. My “ghoul in a heavy metal outfit rides a mountain bike through a post-apocalyptic wasteland” prompt showed that it’s really hard to fit the geometry of a humanoid character to the mechanics of a bike. “A high-tech communications network spans the globe with data surging through the wires” produced a tangled mess.
But you can refine prompts to get better results, and the art style often produces more convincing subjects than the photo style. Some art styles, like doodle drawing, are more forgiving than others. I particularly liked the watercolor option. But Firefly is not perfect.
Firefly comes to Adobe Illustrator
Adobe Illustrator, which designers use to create vector graphics like logos and diagrams, now gets Firefly text prompt abilities for the first time. As with Photoshop, the software will turn a text prompt into a quartet of illustration candidates you can choose from and refine with further editing. The illustrations are fully editable.
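One reason "fully editable" matters is that a vector graphic is a list of shape descriptions rather than a grid of pixels, so every element can be tweaked after generation. This toy sketch (my own illustration, not Adobe's pipeline or output format) builds a minimal SVG badge to show why:

```python
# A vector graphic stores editable shapes, not pixels. This hypothetical
# helper builds a tiny SVG "logo"; each element remains a separate,
# tweakable piece of markup after the fact.

def make_badge_svg(label, fill="#4b6fff"):
    """Return a minimal SVG badge as a string; every element stays editable."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="120" height="120">'
        f'<circle cx="60" cy="60" r="50" fill="{fill}"/>'
        f'<text x="60" y="66" text-anchor="middle" fill="white">{label}</text>'
        "</svg>"
    )

svg = make_badge_svg("Max")
# Editing the graphic later is just editing attributes -- e.g. recoloring:
recolored = svg.replace("#4b6fff", "#ff6f4b")
```

A raster image would require repainting pixels to make the same change; here the recolor is a one-line attribute edit, which is the property that makes generated vector art practical for logos and diagrams.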
And the Adobe Express app for building creations like flyers and social media videos gets a text prompt field of its own that you can use to build templates. For example, if you type in “pirate themed birthday party announcement for children,” it’ll spit out some layout options with custom art and fonts. It’s in beta testing for now.
Read more: Here’s What I Learned Testing Photoshop’s New Generative AI Tool
Generative AI’s creativity can be a problem if you’re looking for factually accurate information for your lawsuit, travel itinerary or high school essay. But it can be a boon for creative uses, and Adobe is counting on Firefly to overhaul what’s possible with its tools and help those who might lack expertise to spread their wings.
Sneak previews of AI-powered projects
AI showed up on Wednesday, the second day of Adobe's Max conference, too, when the company showed several of its traditional "sneaks" — previews of technology under development that it expects to ship eventually.
Ahead of Max, Adobe showed off Project Stardust, a new photo editing system that analyzes a scene, separating it into elements so people can tap on subjects to select, move, delete, resize or modify them. It even notices shadows and alters them accordingly. That kind of AI-powered ability could ease the notoriously difficult process of changing only the part of a photo you want to change.
“Stardust is a new editing engine that we hope will revolutionize photo editing for potentially millions of people,” said Stardust leader Mark Nichoson.
Here are some of Adobe’s other AI-powered sneaks:
Project Fast Fill brings generative AI to video, letting editors delete people from the background of one video frame and then extend that change to the entire video clip. It can also track a designated area so a change can follow subjects. Adobe showed one example adding a necktie to a walking person's outfit and another adding a new foam pattern to the sloshing surface of a latte.
Project Poseable lets people convert a 2D photo of a person into a 3D model in that same posture — sitting, lying down or in a fighting stance, for example. You can then reposition the limbs and joints of that skeletal model into a preferred pose, then with a text prompt build a new character style, like ogre or nurse. Adobe thinks it’ll be good for storyboards, though for now it can’t maintain a consistent character from one creation to the next.
And Project Draw & Delight turns a crude sketch accompanied by a text label into a cartoon. You can add extra material with new sketches and prompts, and it’ll modify the existing design, preserving the character you’ve already chosen.
Firefly has generated 3 billion images so far
In a few months of testing, Adobe customers have embraced AI rapidly, generating more than 3 billion creations so far, Costin said.
“Normally, features in Photoshop get a single percent utilization rate. Generative Fill [a Firefly Photoshop ability] got 10x that percentage in the first month,” Costin said. “Customers love this tech.”
Adobe’s bean counters also likely love it. Adobe subscription plans let you use Firefly many times per month, but in November, Adobe is raising Creative Cloud prices by about 9% to 10%.
The generation takes place on Adobe’s cloud computing infrastructure and costs real money, especially given the high price tag of Nvidia processors that handle most generative AI work these days. Those big AI models typically don’t fit in the memory of an ordinary laptop, but the industry is working on that problem at the same time companies like AMD, Intel and Apple are adding new acceleration abilities to their processors.
Other AI abilities emerging at Adobe Max:
Adobe also is using AI to help you use AI. In Photoshop, it’ll suggest completions to your text prompts to try to produce better results. This could help people who are short on experience in the latest field of computer science, prompt engineering. “If you don’t know what to write, we’re autogenerating prompts not only to statistically write a sentence but to try … to generate images that look beautiful,” Costin said.
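Adobe's suggestions come from a trained model, but the basic interaction can be sketched with a much simpler mechanism: rank a pool of curated completions by overlap with what the user has typed. Everything here — the candidate prompts and the scoring — is my own toy illustration, not Adobe's system:

```python
# Toy prompt auto-completion: score canned completions by how many words
# they share with the user's partial prompt, then return the best matches.
# A real system would use a language model; this just shows the interaction.

CANDIDATES = [
    "a kindly doctor in a hospital room, soft lighting, photorealistic",
    "a red crab waving its claws, macro photo, shallow depth of field",
    "a ghoul on a mountain bike in a wasteland, dramatic storm clouds",
]

def suggest(prefix, candidates=CANDIDATES, k=2):
    words = set(prefix.lower().split())
    scored = sorted(
        candidates,
        key=lambda c: len(words & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

suggestions = suggest("red crab photo")  # crab prompt ranks first
```

The value for inexperienced users is the same in both cases: the system steers a vague prompt toward phrasings that are known to generate good-looking images.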
In Lightroom, a new AI-based lens blur effect will artificially blur backgrounds, simulating the bokeh that higher-end lenses can produce naturally to isolate subjects from the rest of a scene.
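The core idea behind that kind of effect can be sketched simply: blur the image, then blend the sharp and blurred versions using a depth estimate, so distant pixels take the blurred value while the subject stays crisp. The sketch below is a bare-bones grayscale, one-row version of the general technique, not Lightroom's implementation (which models lens optics far more carefully):

```python
# Depth-based blur sketch: blend sharp and box-blurred pixel values by a
# per-pixel depth mask (0 = in-focus subject, 1 = far background).

def box_blur_row(row, radius=1):
    """Simple box blur over one row of grayscale pixel values."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def lens_blur_row(row, depth, radius=1):
    """Blend sharp and blurred values: deeper pixels get more blur."""
    blurred = box_blur_row(row, radius)
    return [s * (1 - d) + b * d for s, b, d in zip(row, blurred, depth)]

row = [0, 0, 255, 0, 0]    # one bright "subject" pixel
depth = [1, 1, 0, 1, 1]    # subject in focus, surroundings far away
result = lens_blur_row(row, depth)  # subject stays 255, neighbors soften
```

Real lens-blur features also shape the blur kernel to mimic a lens aperture, which is what produces the characteristic bokeh highlights rather than a flat smear.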
Businesses can customize how Firefly works by uploading their own assets to steer the Firefly generation process in the right direction.