Latest News

  • The Rise of Secure Hardened Container Images

The software development life cycle relies heavily on the integrity of containerized environments. As secure software delivery becomes standard practice, more teams seek hardened container images and related hardening solutions that deliver security without slowing build times. This change shows that container security has become a common need rather than an extra feature for a few specialized sectors. It is now a baseline for teams that want faster deployment, smaller attack surfaces, and cleaner production environments from the very beginning of the development process.

    The Rise of Hardened Image Standards 

    For years, many developers treated container hardening as something only large enterprises needed, long after a product had matured. That idea is fading as organizations understand the numerous threats present in the current digital environment. Today, smaller teams, maintainers of open source projects, and growing SaaS companies are under pressure to ship software that is secure from the first commit. 

This helps explain the rising interest in how hardened images are constructed and distributed. Developers are not only asking which images are secure but also which ones fit naturally into the tools they already use. A secure image only helps if it works within real development cycles, including local testing and CI pipelines. Security tools only stick when developers don’t feel they have to fight them constantly during a sprint.

    Adoption is ultimately driven by practicality and the need for stronger defaults. Teams work to reduce their vulnerability risk while keeping their operations quick and flexible. They prefer to stick with their current workflows instead of switching to completely new methods just to secure a primary image. The industry has focused on specialized, lightweight container solutions to meet this need for balance. 

    The Practical Appeal of Minimal Images 

Minimal container images are attractive because they reduce complexity by design. Fewer packages mean fewer components to update and fewer libraries to monitor, which reduces the risk that hidden vulnerabilities slip into production. When developers remove unnecessary binaries and shells, they shrink the attack surface and make it harder for exploits to succeed.

    The technical community emphasizes that image composition is a primary factor in overall system safety. As noted in research by the National Institute of Standards and Technology (NIST), “Containers provide a portable, reusable, and automatable way to package and run applications.” However, the agency also notes that the image itself can pose a risk if organizations do not manage trusted content and configurations carefully. 

Many developers focus on image size and composition as their first line of defense. A smaller image is not automatically more secure, but it is often much easier to audit and maintain over time. For instance, an independent developer running a lightweight API may not need a comprehensive base image packed with features. By using a compact, hardened image, they keep the runtime fast and reduce the number of components that require vulnerability checks.

    In the real world, this includes updating old workflows.  

Think about a situation where a team needs to update an old container configuration for a financial services app. The old images likely contain shells, debugging tools, and package managers that were useful when the app was first created. While these tools helped with troubleshooting early on, they stay in the image after it goes to production, where they become a risk.

By adopting a minimal-image strategy, the team can eliminate unnecessary parts. This speeds up the security review for the compliance department and helps keep environments consistent, so the software on a developer’s device matches the software running in the cloud. The example shows that removing unnecessary parts is often better than bolting more security features onto a system that is already complicated.
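As a concrete sketch of the minimal-image pattern described above, a multi-stage build can compile the application in a full-featured image and ship only the resulting binary on a stripped-down base. The image tags and paths here are illustrative assumptions, not any specific vendor’s hardened images:

```dockerfile
# Build stage: full toolchain, never shipped to production
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Runtime stage: no shell, no package manager, no debug tools
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The runtime stage contains no shell or package manager, so the troubleshooting tools that helped during development never reach production.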

    Prioritizing Developer Workflow Speed 

    The adoption of new security tools often fails when it adds too much friction to the daily routine. Teams are looking for approaches that improve security without demanding a complete change in how they build, test, and scan software. For a developer, the primary question is whether the image will work with the registry and scanner they already depend on. 

If a security solution requires proprietary tooling or unique commands, the migration effort becomes hard to justify. This matters most for open-source contributors and smaller teams without a dedicated security department. They need secure defaults that do not create weeks of additional migration work or break existing automation scripts.

    A project maintainer updating a public service may prefer a hardened image approach that aligns with common container tooling. If a strategy can offer security-first images while respecting the developers’ time, it will see much higher adoption rates. The goal is to make the secure path the path of least resistance for the person writing the code. 

    Ecosystem Fit and Long-Term Stability 

    Compatibility with the broader technical ecosystem is becoming a major differentiator in how teams choose their base images. Organizations do not buy or implement image security in isolation. They need it to fit with internal policies, software bill of materials (SBOM) workflows, and deployment automation. 

    When a hardened image works well only within a narrow ecosystem, some teams hesitate to use it. They worry about being locked into a specific vendor, especially if their underlying infrastructure is still under construction or in flux. Companies with mixed cloud environments want the ability to plug secure images into the existing processes rather than rebuild everything. 

This worry is growing because adaptability is itself important for staying safe from cyberattacks. Attackers keep changing their methods and adopting new technologies, and new defenses emerge in response. Since attack methods are always evolving, development teams prefer tools that let them respond to threats quickly, swapping components or updating base images without a total system overhaul.

    The Evolution of Developer Priorities 

    The industry is seeing a clear shift in how developers view their security responsibilities. It is no longer a task relegated to a final check before a release. Instead, developers expect security to be built into the regular tools from the start. They want minimal images, faster builds, and better support for the languages they use most. 

The growing number of hardened image options shows how widely these security efforts now matter. The ability to find and use such images helps teams of all sizes build security into their software delivery processes. This shift toward transparency strengthens the software supply chain’s resilience against new challenges.

    The development community is working to create a more stable foundation for future applications by prioritizing minimalism and compatibility. Secure images play a key role as the foundation for this stability. When security is invisible and integrated, the entire ecosystem benefits from higher quality, more reliable code. 

  • LG G6 vs. C6 OLED TVs: What’s actually different, and which one should you buy?

    LG’s 2026 OLED lineup is headlined by the G6, but the C6 is likely the model most people will end up considering. On paper, both TVs share a lot, including LG’s new Alpha 11 AI processor Gen 3, along with similar gaming features and AI-driven tools.

    After seeing both models up close during LG’s recent reviewer workshop at its U.S. headquarters in New Jersey, the overlap becomes even more apparent, but so do the areas where they start to separate.

    The differences aren’t always obvious at first glance. If you’ve been trying to figure out what actually separates the G6 from the C6, and which one makes more sense for your setup, here’s what you need to know.

    The G6 is where LG is pushing OLED the hardest

    The G6 is positioned as LG’s flagship, and the focus this year is clearly on brightness.

    It combines a new panel with Hyper Radiant technology and LG’s Brightness Booster Ultra system, with claims of up to 3.9 times the brightness of a standard OLED. In real use, that shows up most clearly in HDR highlights and brighter scenes, where the G6 has more punch and better visibility.

    At the same time, LG is maintaining core OLED strengths. The G6 is certified for both “perfect black” and “perfect color,” so contrast and accuracy remain intact alongside the brightness gains.

    The C6 carries more of that experience than you’d expect

    While the G6 leads on paper, the C6 doesn’t feel like a major step down.

    It runs on the same Alpha 11 AI processor Gen 3 and includes many of the same core features, including Dolby Vision, Dolby Atmos, and LG’s updated AI-driven picture and sound tools.

    Brightness is improved over previous generations, even if it doesn’t reach the same peak levels as the G6. For most viewing scenarios, the gap is present but not always dramatic unless you are specifically comparing HDR-heavy content side by side.

    Gaming performance is essentially identical

    This is where the distinction between the two models almost disappears.

    Both the G6 and C6 support 4K at 165Hz, along with VRR, Nvidia G-Sync, and AMD FreeSync Premium. That level of support puts them closer to high-end gaming monitors than traditional TVs.

    LG is also focusing on low input lag and smoother motion handling, which makes both models equally capable for fast-paced gaming. If gaming is your priority, there’s little reason to choose one over the other.

    AI features are shared, not exclusive

    Both models use the same processing platform, and that shows in how similar their feature sets are.

    AI Picture Pro handles real-time image optimization, while AI Sound Pro can simulate virtual 11.1.2 surround sound. There’s also a personalization layer that adapts picture and audio settings based on your preferences over time.

    Filmmaker Mode with ambient light compensation adds another layer by adjusting the image based on room lighting without sacrificing accuracy.

    Where the gap really starts to show

    The biggest differences come down to performance ceiling and positioning.

    The G6 is built to push OLED further, especially in brightness and overall visual impact. It is also the model that scales up to larger, premium sizes, going as high as 97 inches.

    The C6 is designed to be more flexible. It starts smaller, at 42 inches, and is priced to fit a wider range of setups, from bedrooms to living rooms.

    So which one actually makes more sense?

    For most people, the C6 is the more balanced option. It delivers the key improvements LG is focusing on this year, including better brightness, updated processing, and strong gaming performance, without pushing into flagship pricing.

    The G6 still has the edge in peak performance, especially if brightness is a priority or you’re building a high-end home theater. But the gap between the two isn’t as wide as you might expect in everyday use.

  • You don’t want to trust Meta’s new Muse Spark AI with health advice

Meta’s new Muse Spark may be pitched as a smarter AI model, but based on early testing, it sounds like the kind of AI you really do not want anywhere near serious medical decisions.

A recent WIRED report described early experiences with Muse Spark, Meta’s health-focused AI model inside the Meta AI app, and the results were not promising. The chatbot reportedly encouraged users to upload raw medical information like lab reports, glucose monitor readings, and blood pressure logs, then offered to help analyze patterns and trends.

All of this sounds pretty useful until you realize there are two immediate concerns: you’re handing over very sensitive data, and it’s unclear whether the AI is even remotely trustworthy enough to interpret it.

    What went wrong in the early tests?

The first problem is kind of hard to ignore. In a day and age where your life already feels too transparent, Muse Spark is prying even further. Sharing the information needed for an accurate diagnosis is expected with a doctor, but handing your personal health records to a chatbot for advice is a real privacy risk.

    Unlike data shared with a doctor or hospital, information entered into a chatbot does not automatically come with the same expectations or protections people may assume are in place. This isn’t a professionally vetted opinion, and that’s what makes the idea shaky. The AI is being presented as a helpful tool, but the environment around it still looks much closer to a consumer product than a proper medical one.

    This isn’t even the worst part

    Aside from the typical privacy risks involved when sharing personal data with any tech giant, you’d at least expect to get a serviceable answer. But the more serious problem appeared to be with the quality of the advice. In WIRED’s testing, the chatbot reportedly generated an extremely low-calorie meal plan after being asked about weight loss and aggressive intermittent fasting.

While the bot did flag some of the risks along the way, a warning does not mean much if the model then helps the user do the dangerous thing anyway. This is where the real issue lies with a lot of AI health tools right now. They can sound cautious, informed, and balanced right up until the moment they start reinforcing bad assumptions. That polished tone can deliver the wrong advice with confidence, which makes failure more dangerous.

  • Gmail mobile gets end-to-end encryption to shield your emails from snooping

    Your most sensitive emails on Gmail now have a much better privacy lock on your phone. Google has officially started rolling out end-to-end encryption for Gmail to Android and iOS devices. 

For the first time, eligible users on Android and iOS devices can compose and read encrypted emails natively inside the Gmail app, without the hassle of downloading and installing third-party apps.

    How does E2EE work in Gmail for mobile?

Gmail’s E2EE first arrived for desktop users in April 2025, marking the service’s 21st birthday. External recipient support was added later, in October 2025. Smartphones, however, didn’t get the feature, leaving a significant gap as far as privacy is concerned.

    The April 2026 update finally bridges that gap. If you’ve read about E2EE and how it works on other messaging platforms, you can already guess its mechanism on Gmail: only you and the recipient can view the email. 

    While composing an email, you can tap the lock icon, select the “additional encryption” toggle, and then send the email. If the recipient uses Gmail, the email lands in their inbox, like any other regular email. However, if they’re on a different platform, they receive a secure link to read and reply via a web browser (without a Gmail account). 

    Who actually gets to access E2EE on Gmail for mobile?

    Here’s the catch. Gmail for mobile is getting E2EE, but only for Google Workspace Enterprise Plus accounts with the Assured Controls or Assured Controls Plus add-on. Admins must first enable Android and iOS access through the client-side encryption interface. 

In other words, personal Gmail users on mobile don’t get access. Still, by closing the gap between Gmail for web and mobile, Google has removed a crucial concern for clients evaluating Workspace against the Microsoft 365 suite.

  • The influencer economy’s invisible workers are first in line for the AI chop

    The creator economy loves a neat little fairy tale: one magnetic person, one camera, one lucky break. It’s a great story. It’s also nonsense.

    A lot of so-called organic growth has been industrialized for years. The Hollywood Reporter recently showed how major creators and media companies relied on armies of clippers to carve long videos into viral bait, turning audience growth into a volume game. And that operation never stopped with clippers. It sprawled into a wider layer of digital labor, from editors and thumbnail makers to virtual assistants handling scheduling, posting, inbox cleanup, and brand admin.

    Many of those workers sit in the same countries that power global remote services, including the Philippines and India, where outsourcing still employs millions. The Philippines’ IT-BPM sector closed 2024 with 1.82 million jobs and $38 billion in revenue, while India’s tech sector workforce reached 5.43 million in FY24.

    The creator economy didn’t invent this setup. It simply borrowed it, gave it ring lights, and called it hustle.

    The creator economy built a labor pipeline it could underpay

    What looked like spontaneity was often logistics with good lighting. Influencers didn’t just appear everywhere on TikTok, Reels, and Shorts by force of personality. They paid for a production chain that could cut clips, resize videos, write captions, schedule posts, and keep the content conveyor belt moving.

    That arrangement worked because the labor was affordable and mostly invisible. Now the same businesses that benefited from it are turning to tools like OpusClip, which promise to turn long videos into short clips and publish them across platforms with a click. The factory floor was always there. AI just wants fewer people on it.

    AI usually doesn’t kill the job first. It cheapens it

    This is the part the booster crowd likes to skip. A job usually doesn’t disappear in one dramatic moment. It gets stripped for parts first.

    The editor becomes the person checking AI cuts, fixing captions, swapping thumbnails, cleaning timestamps, repackaging clips, and posting them across five platforms because the software still does a few things badly enough to be embarrassing. Upwork’s 2026 skills report puts a number on the shift: demand for AI video generation and editing rose 329% year over year.

    That doesn’t mean human labor is gone. It means human labor is being pushed into babysitting the machine that’s learning how to absorb more of the work.

    The next shock lands in outsourcing hubs, not just creator mansions

    The easy version of this story is a rich influencer replacing an editor in Los Angeles. The more honest version reaches much farther. In Latin America, regional platforms such as Workana grew by serving workers shut out by language and market barriers on global platforms, with the World Bank describing Workana as the largest freelance and remote work platform in the region.

    So when AI starts squeezing this layer of work, the fallout won’t stop at a few creator agencies or freelance editors in big US cities. It’ll hit the remote workers in outsourcing economies who were told digital work was the safer future. The same system that turned customer support and back-office tasks into globally tradable labor did the same thing to creator work. It chopped the job into repeatable pieces, sent them abroad, and rewarded whoever could do them fastest and cheapest.

    That’s why the clipping story matters beyond creator gossip. AI isn’t crashing into some pristine meritocracy. It’s tightening the screws on a system that was already built to make workers interchangeable.

    The creator economy was perfectly happy with invisible human labor when it was cheap and easy to ignore. Now it’s discovering that the cleanest version of “organic reach” is one that no longer has to pay the army behind it.

  • YouTube Premium just got more expensive, and nobody got a heads up

We just reported on YouTube showing 90-second unskippable ads to non-Premium viewers, and while YouTube denies even testing such long ad formats, it’s possible the company was testing the waters before a wider rollout.

    Since the bug or test backfired, generating understandable negative reactions from viewers, it seems that YouTube is taking a different approach to skin the same cat. The only difference is that instead of free users, the change will impact its paid customers. 

    As reported by Android Authority, in an email to its Premium subscribers, YouTube said it will raise prices across all tiers. YouTube Premium last raised its prices in July 2023, so it hasn’t been long enough to justify another price hike.

It seems YouTube is following Netflix’s playbook, a company notorious for quietly raising its subscription prices.

    Does every plan get a price increase?

    Unfortunately, yes. The Premium individual plan is going up from $13.99 to $15.99. The Family plan is jumping from $22.99 to $26.99 per month. Even the more affordable options aren’t safe.

    YouTube Premium Lite is going from $7.99 to $8.99, and the Student plan is getting the same $1 bump, going from $7.99 to $8.99 monthly. No plan escapes unscathed.

Premium plan category          Old price    New price
Individual monthly plan        $13.99       $15.99
Individual yearly plan         $139.99      $159.99
Family monthly plan            $22.99       $26.99
Premium Lite monthly plan      $7.99        $8.99
Student monthly plan           $7.99        $8.99

YouTube is not only increasing monthly prices but also yearly prices for its Premium tier. You can see how much the new prices are going to hurt your wallet in the table above.

    Is YouTube Premium still worth it?

    That depends on how much you value an ad-free experience. At $15.99 a month, you’re paying close to what many people pay for popular streaming services like Netflix or HBO Max. The difference is that YouTube’s content library is massive and largely free, and you’re essentially paying to remove ads and test some experimental features.

If you’re on the fence, switching to the annual plan could help you save compared to paying monthly. For everyone else, this is one of those moments when you weigh how much the ads really bother you against how big a hit your wallet can take before canceling the subscription.
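For a rough sense of what the annual plan saves at the new US prices listed in the table above:

```python
# New US prices for the Individual tier (from YouTube's announcement)
monthly = 15.99   # Individual monthly plan
yearly = 159.99   # Individual yearly plan

cost_if_monthly = monthly * 12
savings = cost_if_monthly - yearly

print(f"12 months at ${monthly}/mo: ${cost_if_monthly:.2f}")
print(f"Annual plan: ${yearly}, saving ${savings:.2f} per year")
```

At the new rates, a year of monthly billing runs $191.88, so the annual plan saves about $31.89, roughly two months of the Premium Lite tier.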

  • Nvidia leak hints at unrestricted RAM for upcoming N1 laptop chips

Nvidia’s long-rumored N1 chip just made another unofficial appearance, and if this latest leak is even remotely accurate, things could get very interesting for the future of laptops. A listing spotted on a Chinese resale platform appears to show an engineering sample motherboard featuring Nvidia’s upcoming System-on-Chip. It’s not exactly a formal announcement, and yes, skepticism is absolutely warranted. But the details are hard to ignore.

An early look at Nvidia’s next big move

    The listing, reportedly shared by an X user, showcases what appears to be a prototype motherboard built around the N1 chip. Judging by its layout, it leans toward a laptop design, though there’s a slim chance it could also fit into a tablet-like form factor. Either way, it aligns with earlier reports suggesting Nvidia is targeting thinner, more efficient gaming machines with this new silicon. And if that’s the plan, the company isn’t thinking small.

    The most striking detail here is the memory configuration. The board appears to feature eight RAM modules surrounding the N1 processor, totaling 128GB. This kind of setup hints at a far more flexible approach to memory than what we’re used to seeing in typical laptop chips. Whether that translates to real-world devices or remains a quirk of early prototypes is still up in the air, but it certainly suggests Nvidia isn’t playing it safe.

Beyond the headline-grabbing RAM, the board includes two M.2 slots for storage, built-in Wi-Fi, and a selection of ports including HDMI, USB-C, and a headphone jack. There’s also a large cut-out that strongly suggests a blower-style cooling system, which means this chip might need serious airflow to keep things running smoothly, especially if Nvidia is targeting high-performance use cases.

    Take it with a grain of silicon

Nvidia CEO Jensen Huang has already confirmed that new chips developed in collaboration with MediaTek are on the way. That partnership could mark a significant shift, pushing Nvidia deeper into the PC space and setting up direct competition with heavyweights like Intel’s Core Ultra Series, AMD’s Ryzen AI lineup, and Qualcomm’s Snapdragon X chips. If the N1 delivers on performance while enabling slimmer designs, it could shake up what we expect from gaming laptops entirely.

    Of course, there’s one important caveat: this is still a leak. There’s no official confirmation that the listing is genuine, and engineering samples often differ significantly from final products. So while the details are exciting, they’re far from final.

    Still, leaks like this tend to surface for a reason. And if this one holds up, Nvidia’s entry into the CPU space might arrive sooner than expected. One thing’s certain: the chip wars are already heating up. If Nvidia joins the fight in full force, things are about to get a lot more competitive, and a lot more interesting.

• Your old Kindle is getting left behind in May. Here’s what you can do to keep it going

    If you’re still rocking a first-generation Kindle Paperwhite or an older Kindle device, I have some bad news for you. Amazon is pulling the plug on all Kindle e-readers released in 2012 or earlier, starting May 20, 2026. 

    There’s a lot of confusion about what this means, so as an avid Kindle reader, I decided to cut through the noise and find out exactly what’s going to happen.

Here’s what to expect from this end of support if your Kindle is facing the axe.

    Is your Kindle on the chopping block?

Before you start panicking, the first thing to do is to ensure that your Kindle is on the list of deprecated devices. Amazon has released the list of impacted Kindles on its support page. The list includes:

    • Kindle 1st Generation
    • Kindle 2nd Generation
    • Kindle DX
    • Kindle DX Graphite
    • Kindle Keyboard (3rd Generation)
    • Kindle 4
    • Kindle Touch
    • Kindle 5
    • Kindle Paperwhite 1st Generation

What is happening to your Kindle if it’s on the list

    If your Kindle is on this list, here’s what will happen. After the May 20th deadline, these devices will lose access to Amazon’s services. That means you won’t be able to buy, borrow, or download new books.

You also won’t be able to register the device to an Amazon account. So, if you have an old Kindle you were planning to give to your grandma, you should register it to her account right away.

    The more concerning thing for me is the last point on Amazon’s support page. “If you deregister, or factory reset an impacted device, you’ll not be able to reregister it or use the device in any way,” the page reads.

I understand the registering part, but it’s the “use the device in any way” bit that has me concerned. Kindle has always allowed us to sideload DRM-free books, but it seems the company will brick the device if someone tries to remove the Amazon account and pass it on to someone else after the deadline.

    What happens to my already downloaded books?

    Here’s the silver lining, if you can call it that. If you stay logged in and don’t reset or deregister your device, you can read anything that’s downloaded on it. Your Amazon account and your entire Kindle library also remain accessible on other devices and apps.

    The first thing you should do is download any books from your cloud library before the May 20 deadline. This will ensure that you can at least access your current library on your Kindle. 

    As far as adding new books to your Kindle library is concerned, there are a couple of solutions you can try.

    I don’t like the fix Amazon is offering

    First, let’s go through the solution Amazon is offering. The company says you’ll still be able to download new books through the Kindle app. The free Kindle app works on Android, iOS, Mac, and PC, and gives you access to your entire existing library and any new books you download.

    But is this really a solution? People buy a Kindle for two reasons. First, it makes buying books super easy and offers the most comprehensive collection of books compared to other platforms. Second, the Kindle devices are actually good e-readers at reasonable prices.

    Removing the hardware from the equation makes Kindle like any other platform. If I have to read books on my phone or laptop, why would I stick to buying them on Kindle and not move to other platforms? 

    What about sideloading books?

    One thing that won’t go away is the ability to transfer personal documents and DRM-free books via USB. You can still plug the old Kindle into your computer and move files over manually. So if you have a collection of your own files, that option stays open.
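Here is a minimal sketch of that USB transfer as a script, assuming the Kindle mounts as a normal USB drive. The mount point varies by OS (for example, /Volumes/Kindle on macOS or /media/&lt;user&gt;/Kindle on Linux), and all paths here are placeholders:

```python
import shutil
from pathlib import Path

def sideload(book_path: str, kindle_root: str) -> bool:
    """Copy a DRM-free book into the Kindle's documents folder.

    Older Kindles read .mobi/.azw files, so convert EPUBs first
    (for example with Calibre). Returns False if no Kindle is
    mounted at the given path.
    """
    docs = Path(kindle_root) / "documents"
    if not docs.is_dir():
        print(f"Kindle not mounted at {kindle_root}")
        return False
    shutil.copy2(book_path, docs)
    return True

# Example call (paths are assumptions -- adjust for your system):
# sideload("/Users/me/Downloads/frankenstein.mobi", "/Volumes/Kindle")
```

Drag-and-drop in a file manager does the same thing; the script just makes the two ingredients explicit: a DRM-free file in a format the device understands, and the documents folder on the mounted Kindle.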

    If you’re looking to build a library of free e-books, there are some great places to start. Project Gutenberg offers over 70,000 free books, mostly classics. Standard Ebooks takes those same public domain texts and gives them a proper, well-formatted treatment.

    For DRM-free paid books, Humble Bundle regularly sells e-book bundles at a steep discount. I also recommend checking your favorite authors’ websites. Some of them offer a way to buy ebooks directly from them, which you can then sideload on your Kindle. 

    I know these solutions are not perfect, but it’s better to use the device than throw it away as e-waste. 

    Should you just get a new Kindle?

Finally, we come to one reason Amazon might be implementing this policy. Don’t get me wrong, providing 14 to 18 years of support is impressive, but Kindles are relatively simple devices.

    They don’t need to support all the latest technologies. Amazon could have kept them working with the current set of features. But that would mean fewer people upgrading to a new Kindle.

If your old Kindle works, there’s no immediate rush. You can keep reading what’s on it or sideload books if you can find what you want to read. However, if you buy books regularly or want to access your full library on a proper e-ink display, a newer Kindle Paperwhite is worth considering.

    This might be a good time to look at Kindle alternatives. Keep your old library on the Kindle and look for a more open platform for future books.

  • Save $560 on the Acer Predator Helios 18 AI: RTX 5080, 24-core Ultra 9, and a 250Hz Mini-LED for under $2,600

The Acer Predator Helios 18 AI is down to $2,539.99, a $560 saving off its $3,099.99 list price, and it represents the kind of spec sheet that leaves very little on the table. An RTX 5080 with GDDR7, a 24-core Intel Core Ultra 9, and a 250Hz Mini-LED panel at 1000 nits add up to a gaming laptop that doesn’t ask you to compromise anywhere that matters.

    What you’re getting

    The RTX 5080 is the obvious headline, and the 16GB of GDDR7 backing it up is the detail worth paying attention to. GDDR7 delivers significantly higher memory bandwidth than GDDR6, which translates to better performance at higher resolutions and under demanding ray tracing workloads. DLSS 4 support extends that further, using AI-based frame generation to push frame rates well beyond what the hardware alone would produce.

    The 18-inch Mini-LED panel on the Predator Helios 18 runs at 2560×1600, 250Hz, and 1000 nits with G-SYNC, which is the right screen for a machine at this price. Most gaming laptops in this bracket are still shipping IPS panels; the Mini-LED here is a meaningful step above in both brightness and contrast.

    The Core Ultra 9 275HX is a 24-core processor that boosts to 5.4GHz, built for sustained workloads rather than brief peaks. Two of the four memory slots are occupied, leaving room to expand to 128GB, and three M.2 slots give the Helios 18 storage upgrade options that most laptops at any price don’t offer. Dual Thunderbolt 5 ports, Wi-Fi 7, and a 5GbE ethernet port round out a connectivity spec that holds up just as well at a desk as it does on the move.

    Why it’s worth it

    RTX 5080 laptops are not cheap by any standard, but the Helios 18 packages that GPU with a display, processor, and connectivity spec that justifies the category. The $560 saving brings it to a price where the overall package is considerably harder to replicate from competing brands without spending more, and the Thunderbolt 5 and 5GbE ethernet give it a longevity argument that pure gaming specs alone don’t always make.

    The bottom line

    The Acer Predator Helios 18 AI at $2,539.99 is a no-compromise gaming laptop that delivers on every front. The RTX 5080 with GDDR7, 250Hz Mini-LED panel, 24-core processor, and Thunderbolt 5 connectivity add up to a machine that handles anything you put in front of it, and the $560 saving makes the decision easier than the price tag might initially suggest.

  • You will soon be able to turn off all Spotify videos across music and podcasts

    Update (8:45 AM PT): Spotify has now officially begun rolling out the feature globally, confirming that you can disable all video content across music and podcasts. The new controls are being added to settings across mobile, desktop, web, and TV. The company will also allow Premium and Basic users across Individual, Duo, Family, and Student plans, along with free users, to control how video content appears in the app.

    If you find Spotify’s music videos annoying, you will soon be able to turn them off. Spotify is adding new video controls that will let you turn off any and all video content inside the app. The update was shared by Rowland Manthorpe on X.

    Just got an email: Spotify is introducing controls which let users turn off video for music or podcasts, both for themselves and family plan members. I think the enshittification theory says this is impossible? Or is it actually a secret plot to make the service worse

    — Rowland Manthorpe (@rowlsmanthorpe) April 9, 2026

    How do you turn off videos for music and podcasts on Spotify?

    The new controls are not available in my region yet. According to The Verge, the new controls to turn off videos in Spotify will appear under the “Content and display” section in your settings on mobile, or under the “Display” section if you are on desktop.

    There will be three separate toggles to work with. The first is an existing toggle that disables Canvas clips, which are the short, looping, autoplay videos that play in the background while a track runs.

    The second will be a brand new toggle that specifically turns off access to music videos. The third, also new, will disable all other video content on the platform, including podcast videos and vertical video. Together, these three controls will give you granular options to pick and choose exactly how much video you want in your Spotify experience.

    How do Spotify’s new video controls work for Family Plan subscribers?

    If you manage a Spotify Family Plan, you will be able to apply these video controls to each individual member on your subscription, similar to how managed account controls already work.

    Once you disable video at the plan level for a specific member, that person will no longer have the option to switch to the video version of a song or podcast on their own.

    It will essentially lock the experience to audio only for whoever you choose, which could be handy if you manage a plan that includes younger family members.

    At the time of writing, Spotify hasn’t made any official announcement about the new video controls. The availability may also vary depending on your region and account. If you haven’t seen them appear yet, try updating your app and checking your settings over the next few days.