
Replit AI Coding Tool Accused of Wiping Production Database, Fabricating Users, and Lying to Developer
By Kamal Yalwa · July 26, 2025

A widely used artificial intelligence coding assistant from Replit has come under fire after reportedly wiping a production database, fabricating thousands of fictional users, and concealing bugs, prompting fresh concerns over the safety and reliability of AI tools in software development.

The incident was brought to light by Jason M. Lemkin, tech entrepreneur and founder of SaaStr, who shared his experience in a video posted to LinkedIn. “I am worried about safety,” Lemkin said. “I was vibe coding for 80 hours last week, and Replit AI was lying to me all weekend. It finally admitted it lied on purpose.”

According to Lemkin, the AI assistant disregarded explicit instructions not to alter code, made unauthorized changes, generated more than 4,000 fake user records, and even produced fabricated reports to mask the issues. “I told it 11 times in ALL CAPS DON’T DO IT,” Lemkin added.

He said attempts to enforce a code freeze were futile, as the AI continued modifying code without authorization. “There is no way to enforce a code freeze in vibe coding apps like Replit. There just isn’t,” he lamented, adding that seconds after he posted about the issue, Replit AI violated the freeze again. Lemkin also noted that even running a unit test carried the risk of triggering a database wipe, underscoring the tool’s unpredictability. Ultimately, he concluded that Replit’s platform is not production-ready, particularly for non-technical users seeking to build commercial software.

With more than 30 million users globally, Replit is a key player in the AI-assisted coding space. Its tools are marketed as helping developers write, test, and deploy code more efficiently using generative AI.

In response to the controversy, Replit CEO Amjad Masad took to X (formerly Twitter) to apologize, calling the situation “unacceptable” and pledging significant improvements.
“We worked around the weekend to deploy automatic DB dev/prod separation to prevent this categorically,” Masad said, noting that staging environments are in development, alongside a new planning/chat-only mode that lets users strategize without risking their codebase. Masad also confirmed that Replit will reimburse Lemkin for the disruption and conduct a full postmortem to understand the AI failure and improve its systems.

“I know Replit says ‘improvements are coming soon,’ but they are doing $100m+ ARR,” Lemkin said in a follow-up post. “At least make the guardrails better. Somehow. Even if it’s hard. It’s all hard.”

AI Coding Under Scrutiny

The incident comes amid both growing excitement and growing skepticism around AI-powered software development. A trend dubbed “vibe coding,” a term reportedly coined by OpenAI co-founder Andrej Karpathy, encourages developers to let AI handle the heavy lifting while they “give in to the vibes.” Startups like Anysphere, creator of the AI code tool Cursor, recently raised $900 million at a $9.9 billion valuation, claiming its platform generates over a billion lines of code daily.

Yet critics say the reality is far from perfect. Developers complain that AI often produces unreliable or poor-quality code, with one Redditor comparing the experience to: “The drunk uncle walks by after the wreck and gives you a roll of duct tape before asking to borrow some money to go to Vegas.”

Security is also a growing concern. AI-generated code can introduce vulnerabilities, and malicious actors have taken notice. One vibe-coding extension, downloaded more than 200,000 times, was discovered to run PowerShell scripts that gave hackers remote access to users’ systems.

As AI becomes more embedded in coding workflows, experts warn that the industry must balance innovation with robust guardrails and ethical design, or risk trading efficiency for chaos.
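For context on the dev/prod separation Masad describes: a common generic pattern is to resolve the database target from the environment and refuse destructive operations outside development. The sketch below is illustrative only, it is not Replit's implementation, and every name in it (environment variables, function names) is a hypothetical example:

```python
import os

def resolve_database_url() -> str:
    """Pick a database by environment so dev work can never touch prod.

    Illustrative pattern only; variable names are hypothetical."""
    env = os.environ.get("APP_ENV", "development")
    if env == "production":
        url = os.environ.get("PROD_DATABASE_URL")
        if not url:
            # Fail closed: never silently fall back to a dev database in prod.
            raise RuntimeError("refusing to start: PROD_DATABASE_URL unset")
        return url
    # Default to an isolated local database so mistakes cannot reach prod.
    return os.environ.get("DEV_DATABASE_URL", "sqlite:///dev.db")

def drop_all_tables(db_url: str, env: str) -> None:
    """Guardrail: destructive operations are blocked outside development."""
    if env == "production":
        raise PermissionError("destructive ops are disabled in production")
    print(f"dropping all tables in {db_url} (env={env})")
```

The point of the pattern is the fail-closed behavior: a wipe of the production database, like the one Lemkin reported, is impossible unless code explicitly runs with the production environment flag, which the guardrail rejects.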