Category: Technologies

  • Techgeeks: Leading Chinese EV Brand Xpeng Aims for 2026 Launch of Flying Vehicles

    Xpeng, a rapidly rising name among Chinese electric vehicle manufacturers, is setting its sights on the heavens. The automaker is working toward a future where its flying cars could soon be delivered to early adopters. Although this concept may seem like something out of a sci-fi novel, Xpeng is already discussing order volumes, regulatory clearances, and large-scale manufacturing.

    What is the timeline for these aerial vehicles?

    According to Reuters, Xpeng anticipates launching mass production of its aerial vehicles in 2027. Company President Brian Gu revealed that the firm has secured over 7,000 orders, predominantly from within China. Currently, the automaker is navigating the approval process with national aviation regulators. Gu expressed optimism that full-scale deliveries could commence as early as next year, provided that all necessary certifications are finalized.

    The specific vehicle in focus is the “Land Aircraft Carrier” from Xpeng’s AeroHT unit. This system consists of a six-wheeled van equipped with a detachable two-seater electric aircraft mounted in its cargo area.

    Aerial mobility is just one facet of Xpeng’s strategy

    The executive is also backing other ambitious projects for imminent launch, including the mass production of humanoid robots scheduled for the fourth quarter of 2026. Furthermore, 2027 is slated to be a pivotal year for robotaxi trials conducted with global partners. Xpeng intends to initiate robotaxi testing in Guangzhou later this year, potentially resulting in the deployment of hundreds to thousands of autonomous vehicles within the next 12 to 18 months.

  • Self-driving cars promised to end traffic. New research suggests they might make it worse

    Self-driving cars promised a future where you sit back, relax, and glide past the gridlock while the car handles everything. A new study from the University of Texas at Arlington has some bad news for that fantasy. According to research, widespread adoption of autonomous vehicles could actually make traffic significantly worse.

    Professors Stephen Mattingly and Farah Naz conducted a meta-analysis on how self-driving cars could affect vehicle miles traveled (VMT). Their findings showed an average 5.95% increase in vehicle miles traveled. Non-shared autonomous vehicles pushed that figure even higher, to nearly 7%.

    “The rise of AVs could make commuting more convenient, but it may also lead to more pick-up and drop-off activity, more empty vehicle trips, and new costs.”

    The logic is simple. When your car can drop you off and drive itself home, or cruise around looking for rides, roads get busier. As Dr. Mattingly put it, “Where will commuters send their car when they don’t need it? Will it be sent to a parking lot, sent to try to find other riders, or sent home?”
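
    The study's headline numbers are simple percentage effects, so a quick back-of-envelope sketch shows what they imply for a city. Only the 5.95% and roughly-7% figures come from the meta-analysis; the 10-million-mile baseline is an invented placeholder for illustration.

```python
# Applying the meta-analysis percentages to a hypothetical city's baseline
# daily vehicle miles traveled (VMT). The baseline figure is invented.

BASELINE_VMT = 10_000_000          # assumed daily VMT for an example city

AVG_AV_INCREASE = 0.0595           # average increase found across studies
NON_SHARED_AV_INCREASE = 0.07      # non-shared AVs pushed it to nearly 7%

avg_vmt = BASELINE_VMT * (1 + AVG_AV_INCREASE)
non_shared_vmt = BASELINE_VMT * (1 + NON_SHARED_AV_INCREASE)

print(f"Average AV scenario:    {avg_vmt:,.0f} miles/day")
print(f"Non-shared AV scenario: {non_shared_vmt:,.0f} miles/day")
```

    On this invented baseline, the average effect adds roughly 600,000 extra vehicle miles every day, which is the kind of load the researchers warn existing infrastructure would have to absorb.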

    Are robot taxis already causing chaos on the streets?

    To put it succinctly, the research shows that robotaxis are already increasing vehicle miles traveled, and universal adoption would put extreme pressure on existing infrastructure. That pressure is still mostly in the future, but if current news reports are anything to go by, robotaxis are already causing havoc on roads.

    For example, Waymo launched in Nashville on April 7, 2026, and within five days, people were posting viral videos of its robotaxis freezing at intersections and driving into restricted zones. In December 2025, a San Francisco power outage left dozens of Waymo vehicles frozen at intersections city-wide.

    It’s not only a US-specific issue. Just a few weeks back, dozens of Baidu robotaxis simultaneously stopped on elevated highways in Wuhan, China, stranding passengers mid-traffic for over an hour.

    These are just a few examples. Dozens of similar incidents have occurred over the past few months, where robotaxis have gotten stuck for various reasons and caused traffic jams.

    This is happening while robotaxis are still largely in trial mode. Multiply this by a factor of a hundred or even a thousand, and it’s easy to imagine how much worse traffic could become in the future.

    So what happens next?

    Dr. Naz summed it up well: “AVs are not inherently good or bad. Their impacts will depend heavily on how they are deployed and governed.” Without smart policy ahead of mass adoption, the self-driving dream risks handing us a shinier, more expensive traffic jam.

    If we are to pay that price, autonomous vehicles must clearly demonstrate that they are safer and more reliable than human drivers, which they have yet to do.

  • SpaceX Plans In-House GPU Production Amid Nvidia Supply Constraints

    Reports indicate that SpaceX is preparing to produce its own graphics processing units, the critical components driving artificial intelligence. This information was disclosed in excerpts from its S-1 registration statement, a mandatory filing with the U.S. Securities and Exchange Commission prior to a public offering.

    According to Reuters, SpaceX identifies “manufacturing our own GPUs” as a primary future capital expense. This follows Elon Musk’s recent announcement of a dedicated TeraFab chip facility designed to create hardware capable of withstanding extreme space environments and powering its orbital AI data centers.

    The Rationale Behind SpaceX’s Chip Manufacturing Ambitions

    The primary driver is supply chain security. During the TeraFab announcement, Elon Musk noted that even purchasing all currently available chipsets would satisfy only 2% of their projected future needs.

    The filing further cautions potential investors that SpaceX lacks long-term agreements with numerous chip vendors, leaving the company without assurance that it can procure sufficient computing hardware to fuel its expansion.

    While designing and producing custom chips appears to be the logical fix for supply shortages, semiconductor manufacturing is an extraordinarily intricate process, and SpaceX is not a semiconductor firm, at least it has not been one until now.

    Is GPU Production a Feasible Goal?

    Frankly, it represents a monumental undertaking. In the same S-1 document, SpaceX acknowledged that its orbital data center initiatives might not succeed commercially. Advanced chip fabrication involves thousands of precise steps that must execute flawlessly, making it one of the most difficult challenges in engineering.

    TeraFab still has significant ground to cover to master these complexities. The industry is dominated by a few key players: ASML holds a near-monopoly on photolithography equipment, while TSMC controls the vast majority of high-end chip production.

    Musk has confirmed that TeraFab will oversee the entire chip production lifecycle, from design and fabrication to packaging and testing, all within a single facility. Whether SpaceX can successfully execute this plan is yet to be determined.

  • Fall Detection: A Great Smartwatch Feature, But Google Wants Your Account First

    Fall Detection stands out as one of the top reasons to own a smartwatch, particularly for those seeking a safety tool that requires minimal configuration.

    However, Google appears poised to restrict this flexibility on the Pixel Watch, as recently uncovered app strings indicate users will likely need to log in with a Google account to maintain access.

    Currently, Pixel Watch owners can still activate Fall Detection without linking an account. According to Android Authority’s APK teardown, code within version 4.4.0.897056328 of the Pixel Watch app reveals new alerts instructing users to sign into Personal Safety to retain the feature.

    These strings also hint at a potential grace period before access is revoked, although the specific duration remains unclear. Consequently, a feature currently available without an account may soon rely on account status.

    The App Already Signals the Shift

    The most compelling evidence lies in the language Google is integrating into the application. The new notifications indicate a future where users without an account will receive a countdown and a prompt to link Personal Safety to a Google account.

    This suggests the change is more of an impending policy update than a mere possibility.

    There is at least one practical benefit. Fall Detection settings can synchronize across devices once the watch is linked to an account, simplifying management over time, even as it strengthens Google’s ecosystem integration.

    Why This Change Is Significant

    This is important because Fall Detection is a core benefit of smartwatches, particularly for those purchasing one for an elderly relative or anyone seeking emergency assistance with minimal hassle.

    Requiring a login alters that dynamic. Most Pixel Watch purchasers likely won’t mind, as they typically sign in during initial setup.

    However, for users who appreciated the less intrusive approach, this would eliminate one of the more user-friendly aspects of the current experience.

    What to Expect Next

    Nothing here confirms that Google has already implemented this change. These findings come from code teardowns, and work-in-progress code doesn’t always make it to release.

    Nevertheless, the messaging appears sufficiently developed that Pixel Watch owners should prepare for this to become official unless Google alters its plans.

    The remaining uncertainty is timing. Google has yet to disclose the length of the grace period or when enforcement will start.

    Until then, the most reasonable expectation is that Fall Detection may soon become another Pixel Watch feature that functions best, or exclusively, once you are signed in.

  • AI Has Accelerated the Spread of the Most Heinous Online Abuse, and Regulators Are Struggling to Respond

    Artificial intelligence has undoubtedly brought plenty of useful tools to the internet. But it has also handed one of the most horrific forms of abuse a grim new boost. Recent reporting and watchdog findings point to the same ugly pattern of generative AI helping offenders create child sexual abuse imagery on a greater scale.

    The material is becoming increasingly realistic, and it arrives in formats that are harder for platforms, regulators, and child-safety groups to deal with.

    How AI is making the scale worse and content more extreme

    Back in February, Reuters revealed that actionable reports of AI-generated child sexual abuse imagery had more than doubled over the past two years, while the Internet Watch Foundation later said it identified 8,029 AI-generated images and videos of child sexual abuse in 2025 alone. This grim picture was also laid out in a Bloomberg report on how generative AI is changing the child sexual abuse material landscape in the US.

    Investigators aren’t just dealing with AI-generated pornographic images and videos anymore; they are also finding manipulated images of real children and chatbot conversations in which offenders allegedly seek grooming advice or role-play sexual abuse. Meanwhile, law enforcement is burning time trying to figure out whether a child in an image is real, digitally altered, or entirely fake.

    Real cases are getting more disturbing

    The report points to a Minnesota case involving William Michael Haslach, a school lunch monitor and traffic guard accused of using AI tools to digitally undress children in photos he had taken at work. Federal agents identified more than 90 victims and found nearly 800 AI-generated abuse images on his devices. This showcases how offenders are increasingly using everyday photos pulled from social media to create explicit material.

    Investigators are drowning in volume and bad leads

    The scale is getting ugly fast. Bloomberg reports that NCMEC received 1.5 million AI-linked CSAM reports in 2025, up from 67,000 a year earlier and 4,700 in 2023. At the same time, investigators say automated moderation systems are generating a flood of junk tips, swamping already overstretched task forces. And every wrong call burns time that could have gone toward a child facing immediate harm.

  • NASA Targets Early September for Roman Space Telescope Launch, Marking a Major Timeline Shift

    NASA is now targeting an early September 2026 launch for the Nancy Grace Roman Space Telescope, accelerating the schedule from a previous deadline of May 2027. This shift positions the mission as a critical event to monitor in the coming months.

    The rationale is straightforward: Roman is engineered to capture expansive sections of the sky using high-resolution infrared imaging.

    Rather than zooming in on isolated regions, the observatory is designed to conduct wide-field, deep-sky surveys that will enable researchers to investigate dark energy, dark matter, exoplanets, galaxies, and stars with unprecedented scale.

    NASA anticipates the telescope will generate a 20,000-terabyte data archive during its five-year primary mission. This extensive dataset is expected to facilitate research on 100,000 exoplanets, hundreds of millions of galaxies, and billions of stars, highlighting why an earlier launch date carries significant scientific weight.
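
    The archive figure lends itself to quick arithmetic. Only the 20,000-terabyte total and the five-year primary mission come from NASA; the per-year and per-day breakdown below is simple division for illustration.

```python
# Back-of-envelope rate implied by Roman's projected archive size.
# Figures from NASA: 20,000 TB over a five-year primary mission.

ARCHIVE_TB = 20_000
MISSION_YEARS = 5

tb_per_year = ARCHIVE_TB / MISSION_YEARS
tb_per_day = tb_per_year / 365

print(f"{tb_per_year:,.0f} TB/year, roughly {tb_per_day:.1f} TB/day")
```

    That works out to roughly 11 terabytes of survey data per day, which helps explain why the archive, rather than any single observation, is expected to be the mission's lasting contribution.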

    A Mission Designed for Broad Coverage

    Roman’s primary strength lies in its expansive field of view. By merging a wide observational range with sharp infrared imaging, the telescope offers astronomers a highly efficient tool for mapping large swaths of space, outperforming instruments designed for narrower targets.

    This broad capability is crucial because the mission supports diverse scientific goals. While Roman focuses on dark energy, dark matter, and exoplanets, NASA notes its instruments will also aid in discovering rare celestial events and previously unknown objects.

    Significance for the Astronomy Community

    The long-term value of Roman will stem as much from its data archives as from its initial findings. A survey of this magnitude will provide researchers with a comprehensive resource to revisit for years, enabling them to cross-reference observations, validate theories, and identify key targets for other observatories.

    This utility extends beyond NASA, influencing the broader astronomical community. Roman’s data could help determine future research priorities, a hallmark of how major observatories impact the field.

    Key Milestones Ahead

    The telescope is scheduled to arrive at Kennedy Space Center in June, with a planned launch aboard a SpaceX Falcon Heavy from Launch Complex 39A in Florida.

    Following closely after Artemis II, NASA’s first crewed lunar flyby in over five decades, this accelerated schedule contributes to a period of significant momentum for the agency.

    NASA will announce the exact launch date as prelaunch preparations advance, making the confirmation of the early September target the next critical milestone.

  • TORRAS × FPF Collection Merges Intelligent Design With The Essence Of Worldwide Soccer

    The lineup focuses on TORRAS’ iconic accessories, such as its phone cases with built-in kickstands for the iPhone 17 series. Instead of highlighting general use cases, the focus is on sports-related contexts – from training drills and match prep to post-game reviews and relaxed moments where soccer stays woven into daily routines.

    The kickstand mechanism is central to this functionality. Engineered for resilience and adaptability, it enables users to angle their devices in various positions, simplifying tasks like reviewing training clips, capturing drills, or watching match highlights without needing to hold the phone. These are standard activities for athletes and fans who live and breathe the game beyond just watching it on TV.

    The design supports these moments seamlessly, never breaking the flow. Whether you’re setting your phone on the sideline to capture a practice or quickly propping it up to study tactics, the accessory becomes an integral part of the process rather than a secondary tool.

    Design Drawn From Football Heritage

    The visual style of the TORRAS × FPF collection takes direct inspiration from the Portugal National Team’s identity. Details drawn from the team’s colors, history, and sense of pride are woven into the design with care and subtlety.

    Rather than relying on loud logos, the collection focuses on nuanced details. The aesthetic captures the spirit of football not through obvious symbols, but through subtle cues that echo the discipline and commitment the sport demands. This approach mirrors the reality of soccer itself, where the real work happens away from the spotlight – through practice, preparation, and repetition.

    By keeping the design understated, the products remain versatile for daily use while still honoring the sport. The outcome is a look that feels purposeful without being overbearing, letting the essence of Portuguese football live within the product rather than just resting on its surface.

    Crafted For Daily Football Rituals

    This lineup is built for people who interact with football as part of their daily life, not just during matches. It targets athletes, sports fans, and content creators who actively participate in training, analysis, or recording as part of their engagement with the game.

    It also serves those who document and share these experiences – whether capturing a practice, analyzing tactics, or tracking progress over time. In these situations, accessories like the Ostand case act as more than just gear; they enable how these moments are captured and revisited.

    Crucially, the positioning avoids limiting the audience. While smartphones are key to these interactions, the use cases extend beyond a “mobile-first” perspective. The products are equally valuable on the field or at training grounds, where hands-free convenience and quick adjustments are vital.

    As the 2026 World Cup nears, the timing of the collection aligns with a worldwide surge in football interest. Yet, instead of being tied to the tournament itself, the products are rooted in the everyday experiences that surround it – before, during, and after the games.

    Functionality Tailored To The Pitch

    At the heart of the collection is the Ostand kickstand system, refined to fit naturally into sports contexts. Its multi-angle adjustment lets users switch between tasks quickly, whether capturing video, checking live scores, or reviewing content on the fly.

    The design prioritizes ease of use without adding clutter. This is especially important in sports settings, where interactions are fast-paced, situational, and hands-on. Being able to position a device instantly, without extra gear, makes the accessory far more practical in these environments.

    Rather than highlighting technical specs or abstract benefits, the product’s value shines through how it enhances these interactions. It’s not about creating new habits, but making existing ones smoother.

    Beyond A Simple Partnership

    The TORRAS × FPF collection ultimately signals a shift in how brand collaborations are being approached. Instead of focusing on surface-level branding or symbolic storytelling, it centers on everyday moments that define how people experience football.

    The emphasis is not on the result – the goals, the victories, or the spectacle – but on the routines that lead up to them. Training sessions, preparation, repetition, and personal progress form the foundation of the sport, and this is where the collection finds its relevance.

    By integrating Portugal-inspired design elements into functional accessories, TORRAS creates a connection that feels natural rather than constructed. The products support how users already engage with their devices, while subtly reflecting the culture and identity of football in the background.

    In doing so, the collaboration moves beyond being a themed release. It becomes a way to capture and support the everyday relationship people have with the sport – one that extends far beyond the pitch.

  • ChatGPT workspace agents turn AI into a team member

    OpenAI is pushing ChatGPT beyond just answering questions, and this latest update makes that shift pretty obvious. With workspace agents, ChatGPT is starting to look less like a chatbot and more like a full-blown work assistant.

    What are workspace agents in ChatGPT, and how do they work?

    OpenAI has introduced workspace agents, which are essentially shared AI agents designed to handle complex, multi-step tasks across teams.

    Introducing workspace agents in ChatGPT—shared agents that can handle complex tasks and long-running workflows across tools and teams.

    — OpenAI (@OpenAI) April 22, 2026

    Unlike regular prompts, these agents don’t just respond once and stop. They can plan, execute, and continue working in the background, even after the user steps away. They run in the cloud, meaning they can keep processing workflows, updating outputs, and handling tasks over time without constant input. 

    What makes them different is how deeply they integrate into workflows. These agents can access files, run code, connect to tools, and even operate across platforms like ChatGPT and Slack.
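
    OpenAI has not published the API surface behind this, so the following is a purely hypothetical Python sketch of the pattern the report describes: an agent that takes a multi-step plan and keeps working through it after the user steps away. All names here (`WorkspaceAgent`, `run_in_background`) are invented for illustration and are not OpenAI's actual API.

```python
# Hypothetical sketch of a long-running, shareable agent — invented
# names, not OpenAI's workspace-agent API.
from dataclasses import dataclass, field

@dataclass
class WorkspaceAgent:
    name: str
    plan: list = field(default_factory=list)   # pending multi-step work
    log: list = field(default_factory=list)    # progress visible to the team

    def add_steps(self, steps):
        # The agent plans multi-step work up front...
        self.plan.extend(steps)

    def run_in_background(self):
        # ...then keeps executing after the user steps away, recording
        # progress so teammates can inspect or reuse the workflow later.
        while self.plan:
            step = self.plan.pop(0)
            self.log.append(f"done: {step}")
        return self.log

agent = WorkspaceAgent("feedback-tracker")
agent.add_steps(["collect feedback", "summarize report", "flag issues"])
print(agent.run_in_background())
```

    The shared-state design (one agent object, a team-visible log) mirrors the collaboration angle described below: the workflow is built once and reused, rather than re-prompted by each person.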

    Why is OpenAI turning ChatGPT into a team assistant?

    This feels like a natural next step in the AI race. Tools like ChatGPT have already become essential for writing, coding, and research. Workspace agents take that further by automating entire workflows instead of just assisting with parts of them.

    For example, a team could create a shared agent that tracks feedback, summarizes reports, responds to internal queries, and even flags issues automatically. Instead of multiple people doing repetitive tasks, the agent handles it continuously in the background.

    There is also a strong collaboration angle here. These agents are designed to be shared within organizations, meaning teams can build one workflow and reuse it across projects, improving it over time instead of starting from scratch each time.

    Of course, this is still early. These agents operate within permissions, require setup, and are meant to assist rather than replace human decision-making. But the direction is clear. ChatGPT is no longer just something that helps you think. It is slowly becoming something that works alongside you.

  • Apple’s Incoming Leader Vows Bold AI Advances While Preserving Core Design Philosophy

    Apple’s next CEO, John Ternus, is expressing significantly more confidence regarding artificial intelligence, yet he shows no intention of transforming the company into a replica of its competitors. According to Bloomberg, Ternus informed staff during an all-hands gathering that artificial intelligence will unlock “almost unlimited potential” for Apple, creating entirely new avenues across its product and service lines.

    Simultaneously, he emphasized that Apple will continue to prioritize design at the core of its operations, remaining true to its fundamental identity. As previously noted, Apple recently revealed that Ternus will assume the CEO role on September 1, 2026, while Tim Cook transitions to executive chairman.

    What is Apple’s new CEO promising?

    When discussing AI, Apple has often appeared cautious, sometimes lagging behind tech giants like Google, Microsoft, and OpenAI. However, Ternus seems eager to convey a much more optimistic outlook on AI.

    Bloomberg’s report indicates that he told employees he is particularly thrilled to take on this role at this juncture, describing it as the most exhilarating period in his Apple career for developing products and services. Apple’s incoming CEO is stepping into a company that continues to face scrutiny over its AI execution. Thus, these comments serve as an early indicator of his strategic priorities.

    But not everything is changing

    Despite this more robust language around AI, Ternus did not propose a drastic identity shift. Apple is not suddenly becoming an AI-first company. He stated that certain elements “can never change and won’t change,” asserting that the company’s commitment to privacy, security, and environmental sustainability will remain steadfast, and that Apple’s mission and character will remain unchanged under his leadership.

    Thus, the next Apple CEO appears to be reassuring both employees and customers that Apple can intensify its AI efforts without abandoning the distinctive traits that set it apart from the rest of the industry. For a company that has long positioned itself as the intersection of hardware, software, privacy, and industrial design, this reassurance may be just as critical as any AI promise.