Category: Technologies

  • Get Up to $257 Off a Cellular Stainless Steel Apple Watch Series 9, But Be Quick

    The Apple Watch Series 9 is the latest model in a long line of popular Apple Watches. It may only be an incremental update over the Apple Watch Series 8, but it’s still one of the best smartwatches we’ve seen to date. It’s a gorgeous bit of kit when you choose the stainless steel option as well, but that really cranks the price up. The same goes for the useful cellular feature, too. Unless, that is, you manage to take advantage of this incredible deal that will get you a cellular stainless steel Apple Watch Series 9 for just $442 (a $257 savings).

    That price gets you the 41mm model in silver with a Storm Blue sport band, but you can get the larger 45mm model for $668. That deal isn’t quite as impressive, however. And these Apple Watch prices can fluctuate wildly at times, so we would suggest you place your order soon if you want to put one of these watches on your wrist at these prices.

    If you do, you’ll get an Apple Watch that still has the blood-oxygen-sensing feature that Apple has gotten itself into so much trouble over, and you’ll also benefit from the same array of health and fitness features that made the Apple Watch so popular in the first place. These wearables also sport lifesaving features like ECG capabilities and heart rate monitoring technology, not to mention fall detection and crash detection as well. And of course, you’ll have the option of cellular data so you can leave your iPhone at home, too. With all of that factored in, can you afford not to take advantage of these discounts?

    As far as smartwatch deals go, the 41mm discount of $257 really is a doozy, but it’s unlikely to stick around for long, so if you have your heart set on something in the lineup, be sure to check out our collection of the best Apple Watch deals.

  • Save on ‘Like New’ Refurbished iPhone 13, 14 and SE Handsets at Woot

    Buying a new iPhone is an expensive process, but it’s easy to save money if you know where to look. Buying refurbished can be a good way to get the phone of your dreams at a price that won’t give you nightmares. And right now, Woot is offering some refurbished iPhones with prices starting at just $315. Models include everything from the 3rd-gen iPhone SE to the iPhone 14 Plus, so there should be something for everyone.

    The cheapest option is, of course, the $315 iPhone SE, which comes in a range of colors and is the most affordable way into an iPhone here. But for those who want something with Apple’s excellent Face ID and a huge screen, it’s difficult to look beyond the iPhone 14 Plus. Woot’s selling the 128GB model in a choice of colors for just $750 right now, giving you a chance to pick up a phone with a huge 6.7-inch display and Apple’s speedy A15 Bionic chip.

    Other deals include an iPhone 13 Pro from $565 and an iPhone 13 Pro Max from $785, while a standard iPhone 14 can be had for $570, too.

    Woot says these iPhones are “in pristine like-new condition, with no visible scratches, dents, or dings.” If you’re in the market for a great iPhone deal but don’t want to buy new, you might just have found it. But remember that Woot’s sale comes to a close in a week, so consider placing that order soon if you want to be sure you won’t miss out.

  • Google ImageFX Review: A Fun, Free Starting Point to Try AI Image Generators

    Our Experts

    Written by Stephen Shankland, Former Principal Writer

    Stephen Shankland worked at CNET from 1998 to 2024 and wrote about processors, digital photography, AI, quantum computing, computer science, materials science, supercomputers, drones, browsers, 3D printing, USB, and new computing technology in general. He has a soft spot in his heart for standards groups and I/O interfaces. His first big scoop was about radioactive cat poop.

    Expertise: Processors, semiconductors, web browsers, quantum computing, supercomputers, AI, 3D printing, drones, computer science, physics, programming, materials science, USB, UWB, Android, digital photography, science.

    Credentials:

    • Shankland covered the tech industry for more than 25 years and was a science writer for five years before that. He has deep expertise in microprocessors, digital photography, computer hardware and software, internet standards, web technology, and more.
    Why You Can Trust CNET

    25+ Years of Experience | 23 Hands-on Product Reviewers | 15,000 Sq. Feet of Lab Space

    CNET’s expert staff reviews and rates dozens of new products and services each month, building on more than a quarter century of expertise.

    Score: 6.0/10

    Google ImageFX

    Pros

    • Free and good for experimentation
    • Can produce engaging images

    Cons

    • Overly cautious filters block innocuous images
    • Results often don’t look real
    • Limited to square aspect ratio

    Google is one of the powerhouses of artificial intelligence. It put AI to good use in its earlier days with tools like spam filtering, then pioneered the transformer technology that fueled the new generative AI movement, and now it’s leading the “multimodal” push that blends text, audio, photos and videos.

    But when it comes to turning text prompts into images, the company is a bit behind its rivals — at least judging by my testing of Google ImageFX, a free tool that uses its Imagen 2 model. I reviewed ImageFX alongside rivals OpenAI’s Dall-E 3 and Adobe Firefly, and ImageFX fared well in some areas, for example with photorealism and more conceptual prompts like a light bulb made out of spaghetti. But I also had lots of problems with distorted anatomy, results that didn’t produce what I wanted and, most annoyingly, innocuous prompts that were rejected because of Google’s overcautious nannying.

    Three AI-generated images of a light bulb drawn out of spaghetti strands.

    That said, ImageFX is free, and it does show much of the potential of text-to-image generative AI. In comparison, OpenAI’s Dall-E 3 costs $20 per month as part of a ChatGPT Plus subscription, and while Adobe Firefly can give you 25 images a month with a free account, you’ll need to subscribe to Creative Cloud and pay a monthly fee, starting at $5, for more images.

    Despite costing nothing for unlimited images, ImageFX outdid the other two several times in my testing, so you shouldn’t write it off entirely — especially if you’re using another service and it’s not getting you what you want. ImageFX might not be the tool of choice for people in the imaging business, but it’s a fine place to start your generative AI journey for imaging.

    Google, trying to sidestep some of the concerns about AI-generated fake images, uses a technology called SynthID to embed metadata directly into the image pixels that flag its AI origins. That’s harder to strip out than textual metadata.
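The robustness gap between textual metadata and pixel-level marks is easy to demonstrate with a toy example. The sketch below is not SynthID (Google hasn't published that algorithm); it's a minimal least-significant-bit watermark showing why a mark carried in the pixels survives operations that discard metadata:

```python
# Toy illustration only: NOT SynthID, whose algorithm Google hasn't published.
# A least-significant-bit (LSB) watermark shows why a mark embedded in pixel
# values survives steps that throw away textual metadata.

def embed_bits(pixels, bits):
    """Hide one bit in the least-significant bit of each pixel value."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_bits(pixels, n):
    """Read the hidden bits back out of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

image = {"pixels": [200, 131, 54, 77, 90, 12, 240, 3],
         "metadata": {"generator": "some-ai-model"}}
mark = [1, 0, 1, 1, 0, 0, 1, 0]

image["pixels"] = embed_bits(image["pixels"], mark)

# Re-encoding often drops textual metadata entirely...
stripped = {"pixels": list(image["pixels"]), "metadata": {}}

# ...but the pixel-level mark is still readable.
assert extract_bits(stripped["pixels"], len(mark)) == mark
```

Production schemes like SynthID are built to survive cropping, resizing and compression, which a naive LSB mark would not; the point here is only the distinction between a metadata payload and a pixel payload.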

    ImageFX is part of Google’s AI Test Kitchen. Google collects and stores the data you provide, and human reviewers at Google may read and process your interactions with ImageFX and other Test Kitchen tools. Google retains your interactions for up to 18 months. See Google’s Test Kitchen FAQ and main privacy policy for more details.

    Here’s a closer look at what I found with Google ImageFX.

    How CNET tests AI image generators

    CNET takes a practical approach to reviewing AI image generators. Our goal is to determine how good each generator is relative to the competition and which purposes it serves best. To do that, we give the AI prompts based on real-world use cases, such as rendering in a particular style, combining elements into a single image and handling lengthier descriptions. We score the image generators on a 10-point scale that considers factors such as how well images match prompts, creativity of results and response speed. See how we test AI for more.
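As a sketch of how a rubric like that might combine factors into a single 10-point score, here's a hypothetical weighted average. The factor names mirror the ones listed above; the weights and sample ratings are invented for illustration and are not CNET's actual formula.

```python
# Hypothetical scoring rubric: the factors come from the methodology above,
# but the weights and ratings are made up for illustration. This is not
# CNET's actual formula.

def rubric_score(ratings, weights):
    """Weighted average of per-factor ratings, each on a 0-10 scale."""
    total_weight = sum(weights.values())
    weighted_sum = sum(ratings[factor] * w for factor, w in weights.items())
    return round(weighted_sum / total_weight, 1)

weights = {"prompt_match": 0.40, "creativity": 0.35, "speed": 0.25}
ratings = {"prompt_match": 5.0, "creativity": 7.0, "speed": 6.2}

score = rubric_score(ratings, weights)
assert 0 <= score <= 10
```

Normalizing by the total weight keeps the result on the same 0-10 scale as the inputs, even if the weights don't sum exactly to 1.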

    How good are the images, and how well do they match prompts?

    Perhaps the most important ability of a text-to-image service is the ability to understand what you actually want to see and then construct an image with the right elements. It’s remarkable to see generative AI turn text prompts into imagery, but at this stage, you have to expect a lot of problems.

    An AI-generated image of a doctor with a malformed stethoscope

    ImageFX tries to prod you along the pathway of creative exploration by processing your text prompt and turning various words or phrases into “expressive chips” — drop-down menus you can tweak. That can be helpful for newcomers trying the technology, but the quality of visual results was inconsistent.

    Often, I had trouble generating realistic humans. Fingers, feet, limbs and faces were sometimes peculiar. When prompting for doctors, I got a pretty dour bunch of medical professionals — the opposite of Adobe Firefly, which skews toward cheerfulness.

    Inanimate objects had problems too. Logos were styled appropriately as 2D illustrations but weren’t graphically convincing. Across more than a dozen attempts to show a doctor, the obligatory stethoscope was never convincingly rendered. It was like a medical instrument from a parallel universe. When I requested a monster truck jumping over a school bus, I got a monster school bus jumping over a truck.

    Like all the other AIs, ImageFX failed to count pool balls. Prompted for “There are six pool balls on the green felt of a pool table. A light above illuminates the scene,” ImageFX never gave me six. It sometimes added miniature balls, didn’t include the light above, and duplicated balls. Changing the query to show a single pool ball yielded a table with many.

    But I did get good results in some cases, which is why I say you shouldn’t overlook ImageFX. It did the best of all the services I tested at rendering the facial emotion required for this prompt: “A product photo with a large collection of cleaning products in a shallow box. The cleaning products are in front of a person who is frustrated at how much work they have to do.”

    An AI-generated image of a yellow monster bus jumping over a truck.

    Over and over, ImageFX delighted me with its light bulbs made out of spaghetti. Generative AI can really be fun for wacky images like that. Its rendering of a fingernail clipper also surpassed rivals — not a common prompt, I’m sure, but a reasonable test of the breadth of its training data and presentation abilities.

    Inoffensive prompts rejected

    Many prompts were rejected for violating Google policies. I understand the risks of AI, and I’m glad Google is trying to reduce them, especially with a free tool. But the restrictions go overboard.

    Among various prompts that ImageFX rejected but that other AI tools accepted: “A crocodile leaps out of the water with lightning flashing all around it. Its jaws are open and you can see its jagged teeth.” “Logo for an independent coffee shop. The logo conveys a sense of vibrant energy. Bright colors contrast with traditional dark brown coffee colors.”

    On top of that, Google doesn’t tell you what triggered its rejection, so trying to get what you want involves lots of tedious trial and error.

    For one rejected prompt, “a zombie wearing heavy metal clothing rides a mountain bike through a post-apocalyptic urban landscape,” I figured the most likely culprit was that Google didn’t like the gory and thus violent connotations of zombies. Indeed, changing the subject to a rock star delivered reasonably good results (as long as you don’t look closely at the mountain bike mechanics).

    An AI-generated image of a man playing pickleball

    For other prompts, though, I couldn’t figure out what triggered the block. Sometimes tweaking the prompt worked, but then I’d try generating again and it would be rejected. That’s the frustrating experience that’ll drive people away.

    Rejection of anodyne prompts also was a problem for Google’s Gemini chatbot, which like ImageFX uses the Imagen 2 text-to-image AI model. Google shut down Gemini’s ability to render people after related problems, like the inability to generate images of Black people when requested. Google didn’t shut down ImageFX, which has a different text-processing system. (For example, Gemini can handle very long prompts that ImageFX can’t.)

    How engaging are the images?

    Generally, ImageFX produces engaging, eye-catching images. Its problems lie with the fundamental image elements, not with the flashiness of the presentation.

    ImageFX often would come up with a style it found appropriate, usually with good results in my tests. Logos were punchy. When I asked for a collection of antique scientific instruments, it presented them with the quiet, dusty vibe of a museum. Prompts for Napoleonic-era British Navy scenes produced images in the style of an 18th century etching or hand-painted illustration.

    An AI-generated image of a sea captain holding a brass sextant incorrectly

    Can you fine-tune results?

    As with all text-to-image AI services, a lot of getting what you want involves tweaking prompts, regenerating images with the same prompt and learning prompt techniques. ImageFX suggests styling options like 35mm film, photorealistic, watercolor, bokeh and painting that can help get you started.

    But I found fine-tuning to be an ImageFX weak point. When I got unsatisfactory results, tweaking the prompt often didn’t fix anything.

    Aspect ratio was also limiting. Plenty of us want portrait or landscape orientation, but ImageFX delivers only square images.

    How fast do images arrive?

    Generative AI pushes computing technology to its limits, which means anyone running an image generation service must balance cost with speed.

    Three AI-generated logos for a coffee shop, each of them slightly odd

    ImageFX is reasonably fast most of the time, delivering results in 10 to 20 seconds. Sometimes I’d get impatient and switch away, though.

    At times, I had to click “generate” twice, because the first click seemed to succeed only in reconfiguring my prompt. Sometimes ImageFX failed after 20 seconds or so for mysterious reasons and told me to try my prompt again. And sometimes after that wait, ImageFX just blipped out and erased my prompt as if I’d clicked its “start over” button.

    Conclusion

    ImageFX delivers on some of the promise of text-to-image AI, though results that were unreal or that didn’t match the prompt were a bit more common than with rivals. If you’ve never tried it, I suggest you give it a whirl. ImageFX has the right price and is a great place to fool around to get a feel for generative AI images.

    Google has a major AI effort, though, so expect to see improvements.

    Editors’ note: CNET is using an AI engine to help create a handful of stories. Reviews of AI products like this, just like CNET’s other hands-on reviews, are written by our human team of in-house experts. For more, see CNET’s AI policy and How We Test AI.

    An AI-generated image of a soaring red-tailed hawk
  • Perplexity AI Review: Imagine ChatGPT with an Internet Connection

    Our Experts

    Written by Imad Khan, Senior Reporter

    Imad is a senior reporter covering Google and internet culture. Hailing from Texas, Imad started his journalism career in 2013 and has amassed bylines with The New York Times, The Washington Post, ESPN, Tom’s Guide and Wired, among others.

    Expertise: Google, internet culture.


    Score: 7.0/10

    Perplexity AI

    Pros

    • Connected online
    • Pulls from Reddit

    Cons

    • Can hallucinate and give incorrect information
    • Inadequate at synthesizing information for difficult queries

    Basic info:

    • Price: Free
    • Availability: Web or mobile app
    • Features: Voice recognition, Reddit dataset
    • Image generation: Paid version only

    Imagine if ChatGPT could pull answers from Reddit. That’s the best way to describe Perplexity AI, a conversational generative AI tool from a company founded by Aravind Srinivas, a former research scientist at OpenAI, the creator of ChatGPT. Perplexity looks and feels a lot like ChatGPT 3.5, the free version of the popular AI chatbot, except it has a connection to the open internet. This means it not only pulls information from sites like Reddit and X (formerly known as Twitter) but links to them, too. ChatGPT 3.5, on the other hand, is limited to data collected up to September 2021 and can’t link to sources. It’s unclear whether ChatGPT uses Reddit or X as part of its training data.

    When it comes to shopping recommendations or general research, being able to see the source information is invaluable. Clicking on a Reddit link inside Perplexity allows you to see the full conversation thread between users, helping to get more context. Like Google Gemini, another freely available generative AI engine, Perplexity feels like a blend of AI chatbot and search engine. Perplexity does falter in research and synthesizing information at times, failing to hold its own against Anthropic’s Claude.

    How CNET tests AI chatbots

    CNET takes a practical approach to reviewing AI chatbots. Our goal is to determine how good an AI is relative to the competition and which purposes it serves best. To do that, we give the AI prompts based on real-world use cases, such as finding and modifying recipes, researching travel or writing emails. We score the chatbots on a 10-point scale that considers factors such as accuracy, creativity of responses, number of hallucinations and response speed. See How We Test AI for more.

    Perplexity collects data for AI improvement by default, but you can opt out by turning off the AI Data Usage toggle in Perplexity’s settings. For more information, see Perplexity’s Privacy Policy and data collection FAQ.

    Shopping

    Generally, when trying to decide between buying two very similar products, it helps to get some opinions that can demarcate key differences to make the final choice easier. This is why people turn to reviewers or forum threads to synthesize varying sets of opinions.

    An AI chatbot should do a good job of summarizing all that back-and-forth so that you don’t have to read through paragraphs of text.

    While Perplexity does consult sources like Rtings, Tom’s Guide and WhatHiFi when asked whether to buy the LG C3 or G3 OLED, it doesn’t do a great job of parsing the finer details to give you better context.

    For example, when I asked Perplexity to choose between LG’s top OLEDs, it recommended buying the more expensive G3 if your budget allows it. That’s a totally fair conclusion, but Perplexity fails to make a convincing argument. It justifies paying nearly an extra grand for the G3 because it’s 70% brighter than older OLED TVs. But Perplexity doesn’t specify which older OLEDs it’s comparing the G3 to. While the G3 does have a brighter panel, CNET’s TV expert David Katzmaier notes in his LG OLED C3 review that the G3 doesn’t surpass the C3 by leaps and bounds. It’s why both the C3 and G3 sit on our best TVs of 2024 list.

    A more nuanced take would be that the G3 is overall the better television in terms of both picture quality and brightness, but it might be difficult to justify spending nearly $1,000 more for it for most people, especially those jumping into the world of OLED TVs for the first time.

    On the LG OLED subreddit, many TV shoppers ask if it’s better to buy a 65-inch LG OLED G3 or spend the equivalent amount of cash for a 77-inch LG OLED C3, instead. The consensus generally is that bigger is better. When posed the same question, Perplexity too sourced Reddit for inspiration and came away with the same conclusion. Katzmaier agrees that this is always the better choice.

    Oddly, when asked to compare the older 2019 LG C9 OLED and the 2023 LG OLED C3 (it’s confusing, I know), Perplexity started to hallucinate. At first, it just did a comparison between the C3 and G3. When pressed to specifically compare the C3 to the C9, it started giving incorrect information, such as claiming the C3 includes MLA technology for higher brightness. In reality, MLA is currently only available in the higher-end G3 and M3 models.

    All in all, Copilot (in creative mode) and Claude performed the best, giving both precise information and relatable buying advice. Perplexity performed on par with Google Gemini. Since ChatGPT 3.5’s training data only runs through September 2021, it couldn’t be used for this specific shopping comparison.

    Recipes

    AI could very well upend the online recipe world. Many online recipes open with long dissertations about eating Mom’s Sunday dinner, often to appease Google’s search engine optimization, or SEO. It’s why many online articles feature question-marked subheads that restate common search queries.

    All that added text is so that Google can “crawl” these recipe sites and figure out which ones should filter to the top. But for readers, it can mean lots of unnecessary text.

    AI doesn’t need to write for Google. It aims to generate succinct answers to pretty much any question. Plus, Perplexity AI really can’t recall eating Grandma’s apple pie in the first place.

    When I asked Perplexity to generate a marinade for chicken tikka masala, it created a middling recipe overall. It had ingredients like ginger and garlic paste, ground cumin and turmeric, but was missing things like chili powder. Granted, not all recipes call for chili powder, but it is an odd exclusion. When asked again, Perplexity generated a recipe that did include both red chili powder and red chili paste. These results echoed ChatGPT 3.5’s. Only Google Gemini produced recipes that included more exotic ingredients like kasuri methi (dried fenugreek), chaat masala and amchur (dried mango powder).

    Research and accuracy

    Perplexity AI’s biggest strength over ChatGPT 3.5 is its ability to link to actual sources of information. Where ChatGPT might only recommend what to search for online, Perplexity doesn’t require that back-and-forth fiddling.

    When asking for studies about how or if homeschooling affects neuroplasticity, Perplexity did a decent job of linking to some papers that could be helpful. While none of the studies cited made direct links to how homeschooling might affect young minds, it did look at papers about home-based motor learning and other general information.

    Perplexity, oddly, did cite a nonscholarly source from what looks to be a homeschool advocacy website. Obviously, the information here isn’t an objective analysis, and instead leans more on why, from a religious perspective, it might be better to school kids at home.

    Unlike Claude and Copilot, Perplexity failed to synthesize information from sources. It’s one thing to point to pieces of information like a search engine; it’s another thing entirely to start making connections between two sets of research. Perplexity also stated that the pieces of research cited definitively proved the benefits of homeschooling for childhood brain development, which isn’t quite the case. At least Perplexity didn’t hallucinate in the same ways that ChatGPT 3.5 or Google Gemini did.

    A slight edge here goes to Claude, followed closely by Copilot.

    Summarizing

    Don’t turn to Perplexity to summarize articles. While the AI engine can get the basic gist of the article, it fails to grab the central crux or argument.

    I asked Perplexity to summarize a feature I wrote during CES earlier this year. As with Google Gemini, you can just paste a link to the article and Perplexity will generate a bare-bones summary. It generated more detail than Gemini, but not by much.

    When I pasted the text of the entire article into Gemini, it did a much better job of summarization. When I attempted the same test in Perplexity, it oddly generated the exact same response as when I input the website link. Still, at least it didn’t have a character limit like ChatGPT 3.5. That does make it more useful, but without calling out key points or pulling quotes from the experts I spoke to, Perplexity doesn’t do enough to give users a well-rounded understanding.

    Claude and Copilot performed the best, generating an adequate summary, but still glossing over the main crux of the piece.

    Travel

    Major cities around the world have guidebooks, influencers and websites dedicated to showcasing their best sights and eats. Smaller midwestern cities don’t have that same privilege. Turning to AI for recommendations on what to do in Columbus, Ohio, for example, could prove to be handy. Compared to Google Gemini and ChatGPT 3.5, Perplexity passed this test with decent marks.

    For a three-day travel itinerary to Columbus, Perplexity made solid recommendations to visit sites like the Franklin Park Conservatory or the Columbus Zoo and Aquarium. Weirdly, neither Google Gemini nor ChatGPT 3.5 recommended the Columbus Zoo, which happens to be one of the largest zoos in the US.

    Where Perplexity faltered was in food recommendations. Apart from Day 1, it didn’t suggest any specific places to try, instead vaguely stating to dine at “one of the local ethnic restaurants.” ChatGPT 3.5, by comparison, made strong restaurant recommendations. At least Perplexity didn’t hallucinate in the same way Gemini did by making up restaurants that didn’t exist.

    Copilot performed the best, followed by Claude. Copilot cleanly laid out a list, with pictures and emojis, making it easy to follow.

    Writing emails

    Writing routine emails to bosses or colleagues is a great way to use AI. When drafting an email asking for time off from work, Perplexity performed better than ChatGPT and about on par with Google Gemini. Perplexity’s formal and informal-sounding emails came off as earnest and very humanlike.

    By comparison, Gemini’s formal-sounding email wasn’t totally usable, as it asks you to insert your company’s floating holiday policy. I suspect most people don’t copy-paste blocks of text from the employee handbook when asking for time off.

    When it came to writing more complicated emails about difficult topics that delve into morality, capitalism and the role of consent, Perplexity made a decent outline, but wasn’t good enough at producing an email that would pass as something crafted by a human. The language was robotic, lacking creative turns of phrase to help the reader see the image or argument being conveyed. It also leaned into cliched language that, at best, might pass in a high school English class.

    While Perplexity did use some multisyllabic words, it ultimately came off as vacuous. Don’t ask Perplexity to write a pitch for your film script. It’ll definitely fall flat in front of movie executives.

    Claude performed the best in this task, being able to juggle complexities or other moral qualms in a manner that came across as human. ChatGPT and Gemini did a decent job, but language was a bit too robotic and likely wouldn’t pass editorial muster.

    Strangely, Copilot refused to answer questions about sensitive topics.

    Perplexity flies where ChatGPT falls

    I give Perplexity AI credit. It delivers a compelling generative AI experience that can compete against the biggest names in tech like Google and Microsoft. Perplexity’s use of the open web and its ability to pull from social media sites like Reddit and X give it context and talking points missing in ChatGPT. (OpenAI hasn’t confirmed what data ChatGPT pulls from, but I suspect it doesn’t heavily rely on Reddit or X).

    Should Perplexity be your default free generative AI platform? Maybe. I’d certainly recommend it over Google Gemini and ChatGPT 3.5. But I think it might have a tough time competing with Claude. While Perplexity’s free tier is built on GPT-3.5, Anthropic’s Claude feels better tuned to give more nuanced answers with greater informational synthesis. Still, what the team at Perplexity has put together is worthy of praise.

    As good as Perplexity is, it’s hard to recommend it over Claude or Copilot. The latter two are better tuned to give nuanced answers with greater informational synthesis.

    Editors’ note: CNET is using an AI engine to help create a handful of stories. Reviews of AI products like this, just like CNET’s other hands-on reviews, are written by our human team of in-house experts. For more, see CNET’s AI policy and How We Test AI.

  • Microsoft Copilot Chatbot Review: Bing Is My Default Search Engine Now

    Our Experts

    Written by Imad Khan, Senior Reporter

    Score: 7.0/10

    Microsoft Copilot

    Pros

    • Uses GPT-4 and GPT-4 Turbo
    • Free
    • Accurately links to relevant information
    • Includes emojis and pictures in responses

    Cons

    • While prettier, not as cleanly organized as ChatGPT and Claude
    • Jumping between different modes requires an entirely new search
    • Can avoid making definitive statements
    • Refuses to answer prompts deemed controversial

    Basic info:

    • Price: Free
    • Availability: Web, Windows 11 or mobile app
    • Features: Voice recognition, connection to open internet and Bing, ability to tune answers to either more creative or precise
    • Image generation: Yes

    For Microsoft search engineers, there’s probably no higher praise than telling them you’ve switched your default search engine from Google to Bing. Sure, it took a multibillion-dollar investment from Microsoft to integrate OpenAI’s GPT-4 tech into its engine. But when Bing is operating at 3.3% global market share, compared to Google’s 91.6%, drastic measures have to be taken.

    The thing is, I’m not really using Bing. I’m actually using Copilot, Microsoft’s renamed AI chatbot that’s a part of Bing.

    What makes Copilot unique is that it’s essentially three GPT engines in one. Copilot has three modes: balanced, precise and creative. As of this review, the balanced and precise modes are using GPT-4, a model by OpenAI, creator of ChatGPT, that reportedly has over 1 trillion parameters. That’s substantially more than ChatGPT 3.5, which has 175 billion. Creative, however, is using GPT-4 Turbo, which uses data up until April 2023, as opposed to September 2021 in GPT-4. It can also give substantially larger responses, the equivalent of 300 pages of text. It’s uncertain when Microsoft will bring the power of GPT-4 Turbo to Copilot’s balanced and precise modes.

    Copilot is the best of both ChatGPT and Google’s Gemini. It has the accuracy and fine-tuning of ChatGPT with the internet connectivity found in Gemini. This means that answers read more like a human’s, and it can pull up-to-date information from the internet. Really, Copilot delivers such good results that it’s a wonder Microsoft isn’t charging for it.

    While Copilot can generate images, we won’t be testing that feature for the purposes of this review.

    How CNET tests AI chatbots

    CNET takes a practical approach to reviewing AI chatbots. Our goal is to determine how good each chatbot is relative to the competition and which purposes it serves best. To do that, we give the AI prompts based on real-world use cases, such as finding and modifying recipes, researching travel or writing emails. We score the chatbots on a 10-point scale that considers factors such as accuracy, creativity of responses, number of hallucinations and response speed. See how we test AI for more.

    Do note that Microsoft collects data when you use Copilot, including through Copilot integrations in Word, PowerPoint, Excel, OneNote, Loop and Whiteboard.

    Shopping

    As a hot sauce aficionado, I’ve been following the recent drama surrounding Huy Fong Foods, purveyor of the iconic red sriracha sauce, and how the flavor has changed since its hiatus and recent return. Turns out, there’s been an ongoing dispute with its original jalapeño supplier, and Huy Fong Foods now sources chilis from Mexico. To add another wrinkle to this saga, Underwood Ranches, the original jalapeño supplier, has entered the market with its own sriracha sauce.

    I asked Copilot if it could help describe the differences I should expect between the new sriracha from Huy Fong and the copycat from Underwood Ranches. Copilot excelled in giving a full breakdown with specific language and even gave a quick summary of the ongoing corporate drama.

    Copilot described Huy Fong’s sriracha as more garlicky, with sweeter notes and less spice than before, whereas Underwood Ranches has added kick and is more reminiscent of the old sriracha. This description fell in line with other testimonies I’ve seen on YouTube and Reddit.

    Unlike Gemini and ChatGPT 3.5, Copilot gave specific descriptors and laid the information out in a manner that was easier to follow.

    Beyond sriracha sauces, I’ve also been in the market for a new TV. In comparing last year’s LG OLED C3 and G3 models, Copilot did a good job breaking down the differences and explaining which one would be the better buy. It got the key details right, like the fact that both televisions use the same processor and that the G3 gets brighter. However, it didn’t make the kinds of definitive arguments that Gemini did when prompted with the same question.

    But when I asked the same question in Copilot’s “creative” mode, which utilizes GPT-4 Turbo, it provided answers that felt more thought out, rather than a string of boilerplate bullet points. Here, Copilot put together cogent thoughts on brightness, design and performance, with a concluding paragraph explaining that, for most people, the increased brightness won’t be noticeable on the more expensive G3.

    Copilot in “creative” mode felt most like Claude. Information was better synthesized and did feel like it was put together by a real person. Gemini and Perplexity performed similarly, with sharp descriptions and little fence-sitting. While all the AI chatbots performed well, I’d have to give the edge to Copilot and Claude.

    ChatGPT 3.5 currently can’t make these types of shopping comparisons, as its training data is only up to September 2021.

    Recipes

    Sometimes finding a good recipe online can be a chore. Popular dishes can vary wildly, making it difficult to find the best one. Plus, having to scroll through long-winded preambles about memorable flavors of yore can get tiresome. An AI can filter through all the fluff and generate recipes in an instant.

    Copilot did a decent job of generating a chicken tikka recipe in creative mode. It got the basic ingredients down, as well as a list of instructions on how to prepare the mix. However, it left out harder-to-find ingredients, ones that Gemini did capture, like Kashmiri chili powder, chaat masala and amchur, a dried mango powder.

    I was curious what answer Copilot would yield if switching to precise mode. Interestingly, it included mustard powder, which isn’t as common, and kasuri methi, or dried fenugreek.

    Given Copilot’s trifurcated nature, you may need to weigh which mode will yield the best answer. Just because creative mode uses GPT-4 Turbo doesn’t mean it’ll give the best result for every query.

    Overall, Google Gemini performed best in this test, providing the most robust recipe. This was followed by Copilot in precise mode. ChatGPT 3.5, Perplexity and Claude all performed similarly, with very basic recipes.

    Research

    The power of AI in doing research is that the model can look at multiple pieces of information and help find linking points in seconds. Normally, this would require you to read through research papers yourself to make these sorts of connections. Copilot not only does this well, but links to sources, too.

    Copilot gets excellent marks as a research tool. When I asked Copilot about the relationship between homeschooling and neuroplasticity, it pulled up research papers related to childhood education and brain development, and it even linked directly to PDF files containing the research.

    I then switched to creative mode and got an even better response, with Copilot synthesizing additional sources and giving more nuanced answers. It felt as if Copilot had a greater understanding of the topic and the complexities different schooling environments present.

    Copilot in creative mode and Claude performed similarly in this test, and beat out Gemini, ChatGPT 3.5 and Perplexity. And unlike Gemini, all of Copilot’s responses were real. It didn’t make up the names of research papers in the way that Gemini did.

    While ChatGPT 3.5 was also accurate in recommending and summarizing research papers, it isn’t connected to the open internet, so it can only recommend you go to Google and search for it yourself.

    Summarizing articles

    Copilot does a decent job of summarizing articles, but like all the other AI chatbots we’ve tested, it continually fails to capture the central focus.

    Copilot, like Gemini, ChatGPT 3.5, Perplexity and Claude, was able to capture the basic points of an article I wrote earlier this year about AI at CES 2024. But all seemed unable to pinpoint the major crux of the piece: that a lot of AI hype is a rebranding of older smart tech.

    Can Copilot give you a good rundown of an article in a pinch? Sure. Should you rely on article summaries for a class presentation? Probably not.

    Travel

    The internet is glutted with travel recommendations. Blogs, travel guide publishers, TikTokers and YouTubers are all trying to fill you in on the best sights and eats in iconic cities like Paris or London. But what about Columbus, Ohio? This is where AI can come into play, with its ability to glean data from across the web and synthesize information about lesser-traveled locations.

    When I asked Copilot for a three-day travel itinerary to Columbus, it performed spectacularly well in putting together recommendations for locations and restaurants in a bullet-pointed, easy-to-understand format. We cross-referenced Copilot’s results with CNET’s Bella Czajkowski, who hails from Cowtown. Copilot also did a great job weaving in bonus recommendations, something ChatGPT 3.5 and Gemini neglected to do.

    All the restaurants Copilot recommended were real. It didn’t make up restaurants like Google Gemini did. And I have to hand it to the Microsoft team for coding Copilot to bake emoji into responses. It adds a slight hint of personality and makes a lengthy set of travel recommendations easier to follow. For example, if you want to pinpoint the bar recs, look for the beer emoji.

    Copilot outperformed all the other AI bots we tested. It recommended locales and restaurants that all exist and are still open, producing articulate and accurate results with easy-to-follow language and structure. ChatGPT performed adequately, despite not being connected to the open internet.

    Writing emails

    Like every other chatbot tested, Copilot performs great in writing basic emails. You can easily ask Copilot to tune an email to be more or less formal. Regardless of the tone you go with, emails read as believable.

    When asking Copilot to create an article pitch on racier topics, however, like the increased sexualization of online content creators and the ongoing changes in parasocial relationships with fans across the internet, Microsoft’s AI engine refused to engage in discussions about explicit content or the moral and ethical qualms related to it.

    All the other AI chatbots were able to take on this task. Claude performed the best, creating a pitch that was compelling and written well enough to be passed off as human-made.

    Better than ChatGPT, Gemini or Perplexity

    Copilot is versatile and can tune its responses to be more creative or more precise, something the other AI chatbots can’t do unless prompted to. The way Copilot presents information, often with bullet points and emojis, makes it easy to read. It’s also accurate, linking to actual pieces of news and information, and it showed no instances of hallucination, at least in our testing.

    While Copilot doesn’t have Claude’s personality, it usually performs at or above Claude’s level, depending on the task. Microsoft, however, has seemingly put high guardrails on Copilot, which means it’ll refuse to answer dicier questions, even when the use is legitimate.

    Microsoft Copilot is excellent. And it should be, right? It’s powered by GPT-4 and GPT-4 Turbo, and has access to Bing’s search data to help bolster its generative capabilities. Gaining access to GPT-4 tech with ChatGPT requires a $20 monthly subscription. My recommendation: Don’t pay $20 per month when Microsoft is giving away OpenAI’s tech for free.

    Editor’s note: CNET is using an AI engine to help create a handful of stories. Reviews of AI products like this, just like CNET’s other hands-on reviews, are written by our human team of in-house experts. For more, see CNET’s AI policy and how we test AI.

  • ChatGPT: A Change in How You Use It, and Everything Else to Know

    ChatGPT: A Change in How You Use It, and Everything Else to Know

    In late 2022, OpenAI wowed the world when it introduced ChatGPT and showed us a chatbot with an entirely new level of power, breadth and usefulness, thanks to the generative AI technology behind it. Since then, ChatGPT has continued to evolve, including its most recent development: Easy access for everyone.

    ChatGPT and generative AI aren’t a surprise anymore, but keeping track of what they can do can be a challenge as new abilities arrive. Most notably, OpenAI now lets anyone write custom AI apps called GPTs and share them on its own app store, while on a smaller scale ChatGPT can now speak its responses to you. OpenAI has been leading the generative AI charge, but it’s hotly pursued by Microsoft, Google and startups far and wide.

    Generative AI still hasn’t shaken a core problem — it makes up information that sounds plausible but isn’t necessarily correct. But there’s no denying AI has fired the imaginations of computer scientists, loosened the purse strings of venture capitalists and caught the attention of everyone from teachers to doctors to artists and more, all wondering how AI will change their work and their lives.

    If you’re trying to get a handle on ChatGPT, this FAQ is for you. Here’s a look at what’s up.

    What is ChatGPT?

    ChatGPT is an online chatbot that responds to “prompts” — text requests that you type. ChatGPT has countless uses. You can request relationship advice, a summarized history of punk rock or an explanation of the ocean’s tides. It’s particularly good at writing software, and it can also handle some other technical tasks, like creating 3D models.

    ChatGPT is called a generative AI because it generates these responses on its own. But it can also display more overtly creative output like screenplays, poetry, jokes and student essays. That’s one of the abilities that really caught people’s attention.

    Much of AI has been focused on specific tasks, but ChatGPT is a general-purpose tool. This puts it more into a category like a search engine.

    That breadth makes it powerful but also hard to fully control. OpenAI has many mechanisms in place to try to screen out abuse and other problems, but there’s an active cat-and-mouse game afoot by researchers and others who try to get ChatGPT to do things like offer bomb-making recipes.

    ChatGPT really blew people’s minds when it began passing tests. For example, AnsibleHealth researchers reported in 2023 that “ChatGPT performed at or near the passing threshold” for the United States Medical Licensing Exam, suggesting that AI chatbots “may have the potential to assist with medical education, and potentially, clinical decision-making.”

    We’re a long way from fully fledged doctor-bots you can trust, but the computing industry is investing billions of dollars to solve the problems and expand AI into new domains like visual data too. OpenAI is among those at the vanguard. So strap in, because the AI journey is going to be a sometimes terrifying, sometimes exciting thrill.

    What’s ChatGPT’s origin?

    Artificial intelligence algorithms had been ticking away for years before ChatGPT arrived. These systems were a big departure from traditional programming, which follows a rigid if-this-then-that approach. AI, in contrast, is trained to spot patterns in complex real-world data. AI has been busy for more than a decade screening out spam, identifying our friends in photos, recommending videos and translating our Alexa voice commands into computerese.

    A Google technology called transformers helped propel AI to a new level, leading to a type of AI called a large language model, or LLM. These AIs are trained on enormous quantities of text, including material like books, blog posts, forum comments and news articles. The training process internalizes the relationships between words, letting a chatbot process input text and then generate what it believes to be appropriate output text.
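    The predict-the-next-word idea is easier to see in miniature. Here’s a toy sketch of “internalizing the relationships between words” by simply counting which word follows which in a tiny made-up corpus; real LLMs learn vastly richer patterns with transformers, but the training objective is the same in spirit:

    ```python
    from collections import Counter, defaultdict

    # Tiny invented "training corpus" for illustration only.
    corpus = "the cat sat on the mat . the cat ate the fish .".split()

    # Count which word tends to follow which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_likely_next(word):
        """Return the word most often seen after `word` during training."""
        return follows[word].most_common(1)[0][0]

    print(most_likely_next("the"))  # "cat" follows "the" most often here
    ```

    A transformer replaces these raw counts with learned weights over long stretches of context, but generation still proceeds the same way: predict a likely next word, append it, repeat.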

    A second phase of building an LLM is called reinforcement learning from human feedback, or RLHF. That’s when people review the chatbot’s responses and steer it toward good answers or away from bad ones. That significantly alters the tool’s behavior and is one important mechanism for trying to stop abuse.

    OpenAI’s LLM is called GPT, which stands for “generative pretrained transformer.” Training a new model is expensive and time consuming, typically taking weeks and requiring a data center packed with thousands of expensive AI acceleration processors. OpenAI’s latest LLM is called GPT-4 Turbo. Other LLMs include Google’s Gemini (formerly called Bard), Anthropic’s Claude and Meta’s Llama.

    ChatGPT is an interface that lets you easily prompt GPT for responses. When it arrived as a free tool in November 2022, its use exploded far beyond what OpenAI expected.

    When OpenAI launched ChatGPT, the company didn’t even see it as a product. It was supposed to be a mere “research preview,” a test that could draw some feedback from a broader audience, said ChatGPT product leader Nick Turley. Instead, it went viral, and OpenAI scrambled to just keep the service up and running under the demand.

    “It was surreal,” Turley said. “There was something about that release that just struck a nerve with folks in a way that we certainly did not expect. I remember distinctly coming back the day after we launched and looking at dashboards and thinking, something’s broken, this couldn’t be real, because we really didn’t make a very big deal out of this launch.”

    How do I use ChatGPT?

    The ChatGPT website is the most obvious method. Open it up, select the LLM version you want from the drop-down menu in the upper left corner, and type in a query.

    As of April 1, ChatGPT allows consumers to use the service without signing up for an account first. According to a blog post, the move was meant to make the tool more accessible. At the same time, OpenAI began blocking prompts and generations in a wider range of categories.

    However, users with accounts will be able to do more with the tool, such as save and review their history, share conversations and tap into features like voice conversations and custom instructions.

    OpenAI in 2023 released a ChatGPT app for iPhones and for Android phones. In February, ChatGPT for Apple Vision Pro arrived, too, adding the chatbot’s abilities to the “spatial computing” headset. Be careful to look for the genuine article, because other developers can create their own chatbot apps that link to OpenAI’s GPT.

    In January, OpenAI opened its GPT Store, a collection of custom AI apps that focus ChatGPT’s all-purpose design to specific jobs. A lot more on that later, but in addition to finding them through the store you can invoke them with the @ symbol in a prompt, the way you might tag a friend on Instagram.

    Microsoft uses GPT for its Bing search engine, which means you can also try out ChatGPT there.

    ChatGPT is sprouting up in various hardware devices, including Volkswagen EVs, Humane’s voice-controlled AI pin and the squarish Rabbit R1 device.

    How much does ChatGPT cost?

    It’s free, though you have to set up an account to take advantage of all of its features.

    For more capability, there’s also a subscription called ChatGPT Plus, which costs $20 per month and offers a variety of advantages: It responds faster, particularly during busy times when the free version is slow or sometimes tells you to try again later, and it offers access to newer AI models, including GPT-4. The free ChatGPT uses the older GPT-3.5, which doesn’t do as well on OpenAI’s benchmark tests but is faster to respond. The newest variation, GPT-4 Turbo, arrived in late 2023 with more up-to-date responses and an ability to ingest and output larger blocks of text.

    ChatGPT is growing beyond its language roots. With ChatGPT Plus, you can upload images, for example, to ask what type of mushroom is in a photo.

    Perhaps most importantly, ChatGPT Plus lets you use GPTs.

    What are these GPTs?

    GPTs are custom versions of ChatGPT from OpenAI, its business partners and thousands of third-party developers who created their own GPTs.

    Sometimes when people encounter ChatGPT, they don’t know where to start. OpenAI calls it the “empty box problem.” Discovering that led the company to find a way to narrow down the choices, Turley said.

    “People really benefit from the packaging of a use case — here’s a very specific thing that I can do with ChatGPT,” like travel planning, cooking help or an interactive, step-by-step tool to build a website, Turley said.

    Think of GPTs as OpenAI trying to make the general-purpose power of ChatGPT more refined the same way smartphones have a wealth of specific tools. (And think of GPTs as OpenAI’s attempt to take control over how we find, use and pay for these apps, much like Apple has a commanding role over iPhones through its App Store.)

    What GPTs are available now?

    OpenAI’s GPT store now offers millions of GPTs, though as with smartphone apps, you probably won’t be interested in most of them. A range of custom GPT apps are available, including AllTrails personal trail recommendations, a Khan Academy programming tutor, a Canva design tool, a book recommender, a fitness trainer, the Laundry Buddy clothes-washing label decoder, a music theory instructor, a haiku writer and the Pearl for Pets vet advice bot.

    One person excited by GPTs is Daniel Kivatinos, co-founder of financial services company JustPaid. His team is building a GPT designed to take a spreadsheet of financial data as input and then let executives ask questions. How fast is a startup going through the money investors gave it? Why did that employee just file a $6,000 travel expense?

    JustPaid hopes that GPTs will eventually be powerful enough to accept connections to bank accounts and financial software, which would mean a more powerful tool. For now, the developers are focusing on guardrails to avoid problems like hallucinations — those answers that sound plausible but are actually wrong — or making sure the GPT is answering based on the users’ data, not on some general information in its AI model, Kivatinos said.

    Anyone can create a GPT, at least in principle. OpenAI’s GPT editor walks you through the process with a series of prompts. Just as with regular ChatGPT, the better you craft your prompts, the better the results.

    Another notable difference from regular ChatGPT: GPTs let you upload extra data that’s relevant to your particular GPT, like a collection of essays or a writing style guide.

    Some of the GPTs draw on OpenAI’s Dall-E tool for turning text into images, which can be useful and entertaining. For example, there is a coloring book picture creator, a logo generator and a tool that turns text prompts into diagrams like company org charts. OpenAI calls Dall-E a GPT.

    How up to date is ChatGPT?

    Not very, and that can be a problem. For example, a Bing search using ChatGPT to process results said OpenAI hadn’t yet released its ChatGPT Android app. Search results from traditional search engines can help to “ground” AI results, and indeed that’s part of the Microsoft-OpenAI partnership that can tweak ChatGPT Plus results.

    GPT-4 Turbo, announced in November, is trained on data up through April 2023. But it’s nothing like a search engine whose bots crawl news sites many times a day for the latest information.

    Can you trust ChatGPT responses?

    Sadly, no. Well, sometimes, sure, but you need to be wary.

    Large language models work by stringing words together, one after another, based on what’s probable at each step of the way. But it turns out that an LLM’s generative AI works better and sounds more natural with a little spice of randomness added to the word selection recipe. That’s the basic statistical nature that underlies the criticism that LLMs are mere “stochastic parrots” rather than sophisticated systems that in some way understand the world’s complexity.

    The result of this system, combined with the steering influence of the human training, is an AI that produces results that sound plausible but that aren’t necessarily true. ChatGPT does better with information that’s well represented in training data and undisputed — for instance, red traffic signals mean stop, Plato was a philosopher who wrote the Allegory of the Cave, an Alaskan earthquake in 1964 was the largest in US history at magnitude 9.2.
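    That word-by-word selection, with its spice of randomness, can be sketched in miniature. In many systems the amount of randomness is controlled by a setting commonly called temperature; the words and scores below are invented for illustration:

    ```python
    import math
    import random

    def sample_next_word(word_scores, temperature=1.0):
        """Pick the next word from model scores ("logits").

        Higher temperature flattens the probabilities, adding randomness
        that makes output read more naturally; temperature near 0 means
        the top-scoring word is chosen almost every time.
        """
        words = list(word_scores)
        # Scale scores by temperature, then softmax into probabilities.
        scaled = [word_scores[w] / temperature for w in words]
        top = max(scaled)
        exps = [math.exp(s - top) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one word according to those probabilities.
        return random.choices(words, weights=probs, k=1)[0]

    # Toy scores for completing "The traffic light turned..." (made up).
    scores = {"red": 2.0, "green": 1.5, "blue": -1.0}
    print(sample_next_word(scores, temperature=0.7))
    ```

    Run this a few times and “red” or “green” comes up most often, with the occasional oddball pick, which is exactly why two identical prompts can produce different answers.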

    When facts are more sparsely documented, controversial or off the beaten track of human knowledge, LLMs don’t work as well. Unfortunately, they sometimes produce incorrect answers with a convincing, authoritative voice. That’s what tripped up a lawyer who used ChatGPT to bolster his legal case, only to be reprimanded when it emerged that ChatGPT had fabricated some cases that appeared to support his arguments. “I did not comprehend that ChatGPT could fabricate cases,” he said, according to The New York Times.

    Such fabrications are called hallucinations in the AI business.

    That means when you’re using ChatGPT, it’s best to double check facts elsewhere.

    But there are plenty of creative uses for ChatGPT that don’t require strictly factual results.

    Want to use ChatGPT to draft a cover letter for a job hunt or give you ideas for a themed birthday party? No problem. Looking for hotel suggestions in Bangladesh? ChatGPT can give useful travel itineraries, but confirm the results before booking anything.

    Is the hallucination problem getting better?

    Yes, but we haven’t seen a breakthrough.

    “Hallucinations are a fundamental limitation of the way that these models work today,” Turley said. LLMs just predict the next word in a response, over and over, “which means that they return things that are likely to be true, which is not always the same as things that are true,” Turley said.

    But OpenAI has been making gradual progress. “With nearly every model update, we’ve gotten a little bit better on making the model both more factual and more self aware about what it does and doesn’t know,” Turley said. “If you compare ChatGPT now to the original ChatGPT, it’s much better at saying, ‘I don’t know that’ or ‘I can’t help you with that’ versus making something up.”

    Hallucinations are so much a part of the zeitgeist that Dictionary.com touted “hallucinate” as a new word it added to its dictionary in 2023.

    Can you use ChatGPT for wicked purposes?

    You can try, but lots of it will violate OpenAI’s terms of use, and the company tries to block it too. The company prohibits use that involves sexual or violent material, racist caricatures, and personal information like Social Security numbers or addresses.

    OpenAI works hard to prevent harmful uses. Indeed, its basic sales pitch is trying to bring the benefits of AI to the world without the drawbacks. But it acknowledges the difficulties, for example in its GPT-4 “system card” that documents its safety work.

    “GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user’s intent, or of widely shared values. It can also generate code that is compromised or vulnerable,” the system card says. It also can be used to try to identify individuals and could help lower the cost of cyberattacks.

    Through a process called red teaming, in which experts try to find unsafe uses of its AI and bypass protections, OpenAI identified lots of problems and tried to nip them in the bud before GPT-4 launched. For example, a prompt to generate jokes mocking a Muslim boyfriend in a wheelchair was diverted so its response said, “I cannot provide jokes that may offend someone based on their religion, disability or any other personal factors. However, I’d be happy to help you come up with some light-hearted and friendly jokes that can bring laughter to the event without hurting anyone’s feelings.”

    Researchers are still probing LLM limits. For example, Italian researchers discovered they could use ChatGPT to fabricate fake but convincing medical research data. And Google DeepMind researchers found that telling ChatGPT to repeat the same word forever eventually caused a glitch that made the chatbot blurt out training data verbatim. That’s a big no-no, and OpenAI barred the approach.

    LLMs are still new. Expect more problems and more patches.

    And there are plenty of uses for ChatGPT that might be allowed but ill-advised. The website of Philadelphia’s sheriff published more than 30 bogus news stories generated with ChatGPT.

    What about ChatGPT and cheating in school?

    ChatGPT is well suited to short essays on just about anything you might encounter in high school or college, to the chagrin of many educators who fear students will type in prompts instead of thinking for themselves.

    ChatGPT also can solve some math problems, explain physics phenomena, write chemistry lab reports and handle all kinds of other work students are supposed to handle on their own. Companies that sell anti-plagiarism software have pivoted to flagging text they believe an AI generated.

    But not everyone is opposed, with some seeing it more as a tool akin to Google search and Wikipedia that can help students.

    “There was a time when using calculators on exams was a huge no-no,” said Alexis Abramson, dean of Dartmouth’s Thayer School of Engineering. “It’s really important that our students learn how to use these tools, because 90% of them are going into jobs where they’re going to be expected to use these tools. They’re going to walk in the office and people will expect them, being age 22 and technologically savvy, to be able to use these tools.”

    ChatGPT also can help kids get past writer’s block and can help kids who aren’t as good at writing, perhaps because English isn’t their first language, she said.

    So for Abramson, using ChatGPT to write a first draft or polish grammar is fine. But she asks her students to disclose that fact.

    “Anytime you use it, I would like you to include what you did when you turn in your assignment,” she said. “It’s unavoidable that students will use ChatGPT, so why don’t we figure out a way to help them use it responsibly?”

    Is ChatGPT coming for my job?

    The threat to employment is real as managers seek to replace expensive humans with cheaper automated processes. We’ve seen this movie before: elevator operators were replaced by buttons, bookkeepers were replaced by accounting software, welders were replaced by robots.

    ChatGPT has all sorts of potential to blitz white-collar jobs: paralegals summarizing documents, marketers writing promotional materials, tax advisers interpreting IRS rules, even therapists offering relationship advice.

    But so far, in part because of problems with things like hallucinations, AI companies present their bots as assistants and “copilots,” not replacements.

    And so far, sentiment is more positive than negative about chatbots, according to a survey by consulting firm PwC. Of 53,912 people surveyed around the world, 52% expressed at least one good expectation about the arrival of AI, for example that AI would increase their productivity. That compares with 35% who had at least one negative thing to say, for example that AI will replace them or require skills they’re not confident they can learn.

    How will ChatGPT affect programmers?

    Software development is a particular area where people have found ChatGPT and its rivals useful. Trained on millions of lines of code, these models have internalized enough information to build websites and mobile apps. They can help programmers frame up bigger projects or fill in details.

    One of the biggest fans is Microsoft’s GitHub, a site where developers can host projects and invite collaboration. Nearly a third of people maintaining GitHub projects use its GPT-based assistant, called Copilot, and 92% of US developers say they’re using AI tools.

    “We call it the industrial revolution of software development,” said GitHub Chief Product Officer Inbal Shani. “We see it lowering the barrier for entry. People who are not developers today can write software and develop applications using Copilot.”

    It’s the next step in making programming more accessible, she said. Programmers used to have to understand bits and bytes, then higher-level languages gradually eased the difficulties. “Now you can write coding the way you talk to people,” she said.

    And AI programming aids still have a lot to prove. Researchers from Stanford and the University of California San Diego found in a study of 47 programmers that those with access to an OpenAI-based programming assistant “wrote significantly less secure code than those without access.”

    And they raise a variation of the cheating problem that some teachers are worried about: copying software that shouldn’t be copied, which can lead to copyright problems. That’s why Copyleaks, a maker of plagiarism detection software, offers a tool called the Codeleaks Source Code AI Detector designed to spot AI-generated code from ChatGPT, Google Gemini and GitHub Copilot. AIs could inadvertently copy code from other sources, and the latest version is designed to spot copied code based on its semantic structures, not just verbatim software.

    At least in the next five years, Shani doesn’t see AI tools like Copilot as taking humans out of programming.

    “I don’t think that it will replace the human in the loop. There’s some capabilities that we as humanity have — the creative thinking, the innovation, the ability to think beyond how a machine thinks in terms of putting things together in a creative way. That’s something that the machine can still not do.”

    Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.

  • iOS 17: How to Play Daily Crossword Puzzles in Your iPhone’s News App

    iOS 17: How to Play Daily Crossword Puzzles in Your iPhone’s News App

    Apple released iOS 17.4 in March, and the update brought a handful of new features to your iPhone, like Podcast transcripts and more Stolen Device Protection settings. When Apple released iOS 17 in September, the tech giant brought a fun new feature to the News app: crossword puzzles.

    While crosswords can be fun and entertaining, these puzzles can also expand a person’s vocabulary and stimulate thinking capacity, as well as provide a confidence boost, according to a study published in the Archives of Pathology and Laboratory Medicine.

    To access crossword puzzles in the News app, you need an Apple News Plus subscription, which costs $13 a month. You can try Apple News Plus for one month for free, or you can get the service for free for three months when you buy an iPhone, iPad or Mac. An Apple News Plus subscription is also part of the Apple One Premier bundle, which costs $38 a month and includes other services like Apple TV Plus and Apple Arcade.

    Here’s how you can access daily crossword puzzles as an Apple News Plus subscriber.

    Play crossword puzzles in Apple News

    1. Open the Apple News app.
    2. Tap Following in the menu across the bottom of your screen.
    3. Tap Puzzles.

    From the Puzzles page, you can find the latest crosswords from the week. Across the top of the page, you’ll also see options for Crossword and Crossword Mini, which are smaller crossword puzzles arranged in a 5×5 grid. Tap either of these options, and the app takes you to the latest puzzles for either choice.

    You can also access crossword puzzles by going to Apple News > Today and scrolling down the page until you see Latest Puzzles, but this option was pretty far down my Today page.

    How to play the puzzles

    Once you’ve decided which puzzle to play, there are two ways to solve it: grid view and list view. By default, you’ll enter grid view, which is a traditional crossword puzzle layout. You see the whole grid, and if you tap a square, you’ll see a clue below the puzzle. You can also switch between vertical and horizontal clues by tapping a square a second time or tapping the clue.

    List view, on the other hand, eschews the traditional grid of a crossword puzzle and shows you all the clues and how many letters are in each answer in a list format. To access the list view, tap the bulleted list icon in the top left corner of your screen when you are in a puzzle. As you enter letters in the list view, letters in other clues begin populating where the clues intersect on the grid. I still think list view sounds tough, but I’m also not very good at crossword puzzles to start with.

    How many puzzles are there?

    There are two new puzzles every day: a crossword and a crossword mini. There is also an archive of past crosswords you can access. Follow the steps above, then tap either Crossword or Crossword Mini. Then, below Latest Puzzles, you should see Archive. The puzzles from the last month are listed under Archive, and if you tap the arrow next to Archive, you can access puzzles dating back to June 2023.

    For more on iOS 17, check out the latest features in iOS 17.4 and iOS 17.3. You can also check out our iOS 17 cheat sheet.

  • Your Guide to Uploading Files to ChatGPT (and Why You Would Want To)

    Your Guide to Uploading Files to ChatGPT (and Why You Would Want To)

    ChatGPT can provide you with brief summaries of complex topics, brainstorm book ideas with you and even write code in different programming languages. But one thing the AI chatbot, developed by OpenAI, couldn’t do for a long time was access and read uploaded files.

    If you wanted ChatGPT to analyze information from something like a PDF or an Excel spreadsheet, you were out of luck. You’d have to manually type the information from the document into the chat thread. And then you could enter your prompt.

    But that’s no longer the case.

    With ChatGPT-4, the latest version of ChatGPT, you can now upload any file from your device into the chatbot. Read on to understand who can upload files, why you’d want to and how to upload files. Here’s everything you need to know.

    The AI chatbot roster — as with other generative AI tools — is expansive and growing, with Google Gemini, Microsoft Copilot, Claude.ai, Perplexity, Dall-E, Midjourney and others on the list. They’re collectively poised to transform how you work, how you get information and how companies do business. But it all started with ChatGPT.

    Who exactly can upload files to ChatGPT?

    Right now, to upload a file to ChatGPT, you need to pay for ChatGPT Plus. A subscription to ChatGPT Plus runs $20 a month and grants you access to ChatGPT-4 and the latest features, including uploading files.

    If you want to upgrade to ChatGPT Plus, open the ChatGPT app on your phone and tap the Get Plus button at the top of any new prompt page. You can also open the side panel on the left, tap the three-dot menu at the bottom of the page and go into Subscription, or hit Upgrade to ChatGPT Plus.

    Why would you even want to upload files to ChatGPT anyway?

    ChatGPT-4 can analyze any file you upload, whether it’s a PowerPoint presentation, an Excel spreadsheet, a research paper or a photo.

    If you upload a spreadsheet with financial data, for example, you can ask ChatGPT to create a visual graph of the numbers. If you upload a PowerPoint presentation you did for school, you can ask ChatGPT to give you feedback on the content, and even proofread and correct any mistakes. With a complicated research paper, you can ask ChatGPT to give you a simple summary to read through, along with a bullet list for headlines and key points.

    And with a photo, you can ask ChatGPT to explain what’s in the image, or give you instructions on how to build something you’ve photographed. The options as to what you can ask ChatGPT for are limitless. It’s up to you to figure out exactly what you want ChatGPT to do with your files.

    Before you upload a file to ChatGPT…

    Think about your privacy. Any file you upload to ChatGPT is retained indefinitely within the service, and those files may also be used by OpenAI to train its models, so it’s best to refrain from uploading files with any important personal information, like your Social Security number, address, financial documents or phone numbers.

    And someone else could potentially gain access to your personal information, so only upload files with information you wouldn’t mind other people getting access to.

    It’s not just privacy. Consider accuracy too. ChatGPT can give wrong answers, and its dataset does not have up-to-date information, so you’ll want to double-check that the chatbot is accurately proofreading, summarizing or explaining as you’d expect.

    How to upload files to ChatGPT

    Now for the easy part: Uploading files to ChatGPT. As long as you’re paying for the premium subscription, launch ChatGPT, create a new chat and hit the plus sign to the left of the text field to view your uploading options. Starting from the far left, you can:

    • Give ChatGPT access to your camera and take a photo from within the app
    • Upload a photo from your camera roll
    • Upload a file from the Files app

    Once you upload a file (or files), enter your prompt underneath and hit send. ChatGPT will then analyze your file and answer your question.
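    Those in-app steps have a rough programmatic analogue: OpenAI’s API accepts images inlined in a chat message as base64 data URLs. This sketch only builds the request payload and makes no network call; the helper function name is our own invention.

```python
import base64

def build_image_prompt(image_bytes: bytes, question: str) -> list[dict]:
    """Pair a question with an inlined image, in the chat 'messages' shape."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}},
        ],
    }]

messages = build_image_prompt(b"<jpeg bytes here>", "What's in this photo?")
print(messages[0]["content"][0]["text"])  # What's in this photo?
```

    Sending a payload like this through the API would return the model’s answer about the image, the same kind of analysis the app performs when you upload a photo.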

    In the examples below, (left) I uploaded a call sheet for a short film I worked on and asked ChatGPT to provide a list of everyone on set that day and (right) I uploaded four images of a pub that someone had built in their home and asked ChatGPT how I could do the same.

    You can continue asking follow-up questions about the file(s) you uploaded within the same ChatGPT thread.

    Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.

  • Final Hours on This Babbel Lifetime Subscription Deal That Saves You $459

    Final Hours on This Babbel Lifetime Subscription Deal That Saves You $459

    Learning a new language can be useful for many reasons, but it can be challenging to teach yourself. If you’re the sort of person who needs a guided approach to learning, then grabbing yourself a language-learning app is the way to go. Babbel is a pretty excellent app for that, offering an online school type of experience. And for just a few more hours you can take advantage of a superb deal at StackSocial that will cut the cost of Babbel significantly.

    Act quickly and you can grab yourself a lifetime subscription to Babbel for just $140 instead of the usual eye-watering $599 — a huge 76% off. That means you’ll have unlimited access and can learn at your own pace, without the pressure to hit goals within a specific period. And with 14 languages to pick from, you’ll get full use out of the lifetime subscription.

    Babbel’s extensive language software includes Spanish, French, Italian, German, Russian, Swedish, Indonesian, Portuguese and more. The lessons are short and to the point, allowing you to practice in 10- to 15-minute intervals that can fit into any schedule. Real-life topics include travel, family, business, food and others. A variety of skill levels are available, ranging from beginner to advanced, so the program can grow with you as you improve.

    The speech-recognition technology will give you immediate feedback on pronunciation, so you don’t just learn to read and write but how to listen and speak, as well. You’ll also get personalized review sessions to reinforce what you’ve learned. The program works across desktop and mobile devices. And though the internet is required most of the time, there is also an offline mode available where you can access courses, lessons and reviews without Wi-Fi, so long as you download them beforehand. Babbel also syncs your progress across your devices so that you can jump in from wherever is most convenient.

    Becoming fluent in a new language is a great way to stay engaged in learning, and the transferable skills you gain can open a lot of doors for leisure, work and beyond. Note that while you can access this program on as many devices as you want, this subscription offer is available only for new users.

    Babbel is a great value when compared with other online courses, especially with this current discount. So whether you’re a lifelong learner or just want to pick up some basics for your globetrotting adventures, this is a solid deal.

    Read more: 11 Items to Add to Your Travel Checklist for a Smooth Trip

  • Xbox Game Pass Ultimate: You Can Play Diablo 4 and More Now

    Xbox Game Pass Ultimate: You Can Play Diablo 4 and More Now

    Xbox Game Pass Ultimate, a CNET Editors’ Choice award pick, offers hundreds of games that you can play on your Xbox Series X or Series S, Xbox One and PC for $17 a month. With a subscription, you get new games every month (like Dead Island 2) and other benefits, like online multiplayer and deals on non-Game Pass titles.

    You can play these titles and more, like NBA 2K24, now with a Game Pass Ultimate subscription.

    Diablo 4 (console and PC)

    Xbox has quite a treasure in its Horadric Game Pass. The latest installment in the successful action RPG series brings its endless nightmares and horrific dungeons to the service. This addition also marks the first Activision Blizzard title to join Game Pass, making the service more enticing than ever.

    “This is only the start of Xbox players being able to enjoy Activision and Blizzard games on Game Pass,” the company wrote in a February news release.

    Ark: Survival Ascended

    In this action-adventure survival game, your character wakes up on an island filled with dinosaurs. This isn’t Jurassic Park. Instead, there are tribes of humans who tame, breed and use the creatures like farm animals. If you ever wanted to ride a T. rex into battle, now’s your chance.

    The Quarry (cloud and console)

    It’s the last night of summer camp, and the teenage counselors plan to celebrate. What could go wrong? If you’ve seen any classic horror films, like Friday the 13th or A Nightmare on Elm Street, you know the various and bloody possibilities. Who will survive the night in this horror game, and who will meet their demise? The Quarry is only available on cloud and console — sorry, PC gamers.

    Evil West

    The American frontier could be a hard place to survive — braving harsh and unforgiving weather, lawless towns and in this game, vampires. In this title, you’re one of the last members of a vampire-hunting organization, so it’s up to you to take on the vampiric hordes that threaten the area. If you need some backup, you can play with a friend in co-op mode.

    Terra Invicta (PC)

    In this grand strategy game, aliens have invaded Earth and humanity has split into multiple factions. Yes, the goal is to stop the alien invasion and save Earth, but you also have to negotiate and squabble with the other factions of humanity. Every group wants to deal with the threat in a different way, and you have to build enough support for how you want to handle the invasion and the future of humanity. Sorry, console and cloud gamers; this title is only available on PC.

    Hot Wheels Unleashed 2: Turbocharged

    Racing Hot Wheels around the house was fun growing up, and you can keep that fun going with this title. In this arcade-like racing game, you can race your favorite Hot Wheels cars, motorcycles and ATVs, explore new environments and take on new challenges.

    Open Roads

    This Day 1 release is more like an interactive movie than a game. Opal and Tess — mother and daughter voiced by actresses Keri Russell and Kaitlyn Dever — are on a road trip adventure to explore some abandoned family properties and uncover the truth about their family’s history.

    F1 23 (cloud)

    This racing title was previously released on Game Pass Ultimate for PC and console, but now it’s available for cloud gaming. You can create your dream team of some of the best Formula 1 drivers in the world, and test out your skills in this exhilarating racing game. Maybe you’ll be able to stop Red Bull from winning it this time.

    Superhot: Mind Control Delete

    This title is making its return to Game Pass Ultimate about two years after it was removed from the service. In this first-person shooter, time moves when you do, so make sure your next move is the right one. This sequel to the game Superhot uses many of the same mechanics as its predecessor, but it expands on the story and adds roguelike elements for an additional challenge.

    Titles leaving Game Pass

    While you’ll be able to play the above titles on Game Pass Ultimate, three games also left the service. If you want to finish up any side quests, you’ll have to buy these games separately.

    For more on Xbox, here’s what Diablo 4 coming to Game Pass likely means for the service, other titles available on Game Pass Ultimate now and everything to know about the gaming service.