Your Colleague’s AI-Generated App Could Be Exposing Corporate Confidential Information

AI-powered coding assistants have drastically simplified the process of creating web applications, reducing setup time to mere minutes. That accessibility has democratized app development, but it has also introduced a fresh wave of challenges. What happens when these AI-generated applications are deployed without proper security measures? Too often, sensitive information ends up inadvertently exposed across the internet.

A report by WIRED sheds light on a critical security flaw associated with “vibe-coded” applications, which are developed using AI platforms like Lovable, Replit, Base44, and Netlify.

Why This Security Gap Is More Serious Than It Appears

Security expert Dor Zvi and his team at RedAccess examined thousands of these applications and identified over 5,000 that lacked basic security protocols or authentication mechanisms. Many of these apps could be accessed by anyone who stumbled upon the correct URL. Some had only rudimentary barriers, permitting entry with any email address. According to Zvi, nearly half of these exposed applications contained sensitive data, including medical records, financial documents, corporate presentations, strategic plans, and customer service chat logs.

The investigation reportedly also uncovered hospital work assignments containing personally identifiable information, advertising purchase data, market presentation strategies, sales figures, and even customer conversations including names and contact details. Several of these applications remain online, although WIRED could not confirm whether all of the data reviewed was authentic or sensitive.

How Vibe Coding Has Become a Risk in IT

The problem goes beyond any single poorly secured AI app. These tools enable individuals without software engineering or security expertise to build and deploy applications rapidly, often bypassing standard IT approval workflows. Consequently, a marketing team member, operations staff, or founder can create an internal tool, link it to live data, and inadvertently expose it to the public internet.

Zvi likened this situation to the previous wave of exposed Amazon S3 buckets, where misconfigurations caused companies to leak sensitive data on a massive scale. Security researcher Joel Margolis told WIRED that AI coding tools only execute what they are instructed to do. Therefore, if a user does not explicitly request security features, the resulting app may lack them by default.
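Margolis's point, that these tools only add security if asked, can be made concrete with a minimal sketch. The decorator below is a hypothetical illustration (not code from any of the platforms mentioned) of the kind of access check a generated app simply lacks unless the user explicitly requests it: every request is rejected unless it carries a known session token.

```python
import hmac

# Hypothetical in-memory token store; a real app would use a database
# or an identity provider rather than a hard-coded set.
VALID_TOKENS = {"s3cret-session-token"}

def require_auth(handler):
    """Reject any request whose Authorization token is missing or unknown.

    Vibe-coded apps frequently ship without a guard like this, leaving
    every route open to anyone who finds the URL.
    """
    def wrapped(request: dict):
        token = request.get("headers", {}).get("Authorization", "")
        # compare_digest gives a constant-time comparison, avoiding
        # timing side channels when checking secrets.
        if not any(hmac.compare_digest(token, t) for t in VALID_TOKENS):
            return {"status": 401, "body": "Unauthorized"}
        return handler(request)
    return wrapped

@require_auth
def internal_report(request: dict):
    # Without the decorator above, this data is one URL guess away.
    return {"status": 200, "body": "Q3 sales figures: ..."}

# An unauthenticated request is refused; a valid token succeeds.
print(internal_report({"headers": {}})["status"])  # 401
print(internal_report(
    {"headers": {"Authorization": "s3cret-session-token"}})["status"])  # 200
```

The point is not the specific mechanism but the default: if the person prompting the AI never mentions authentication, nothing in the generated app stands between the open internet and the data behind it.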

Responses from the Companies Involved

Replit CEO Amjad Masad wrote on X that some users had published applications on the open web that were intended to be private, noting that public apps being accessible online is expected behavior. Meanwhile, Lovable stated that it takes exposed data and phishing reports seriously and is currently investigating. Base44’s parent company, Wix, asserted that its platform offers security and visibility controls, arguing that public access reflects user configuration choices rather than a platform vulnerability.

This serves as a reality check for anyone treating vibe coding as a fast track to startup success. AI-generated apps can move quickly, but that speed comes with real trade-offs: weak oversight and hidden vulnerabilities can turn into serious problems once a product is in users' hands.