BUSINESS

AI tools raise risk for freelancers

The rapid rise of “vibe coding,” a practice where users build applications by simply describing ideas to artificial intelligence (AI) tools that generate and execute code, is lowering barriers to cybercrime, which carries new vulnerabilities for Pakistan’s growing freelance and IT export sector.

Speaking about his recent findings on vulnerabilities in AI coding platforms, cybersecurity researcher Etizaz Mohsin told The Express Tribune that the shift toward “agentic” AI systems, which can autonomously write, modify and execute code, is fundamentally changing the cybersecurity landscape, often in ways users and organisations are not prepared for. “AI-powered development tools are now deeply integrated into our operating systems with broad file and execution privileges,” Mohsin said. “When traditional software vulnerabilities exist within these platforms, they don’t just remain bugs – they become zero-click attack surfaces.”

Pakistan, one of the world’s top five largest freelance markets, has seen the rapid adoption of AI coding assistants among developers working on platforms such as Upwork and Fiverr. Pakistani freelancers generated over $500 million in foreign exchange earnings during the first half of the current financial year 2025-26.

These tools are widely used to accelerate project delivery, build prototypes and meet tight client deadlines. While this has boosted productivity and output, Mohsin warned that the same tools could be weaponised to carry out sophisticated cyberattacks at scale.

“In certain architectures, you don’t even need to trick the AI,” he said. “If the underlying platform is vulnerable, the AI agent simply becomes an efficient execution layer for traditional remote code execution payloads.” Vibe coding is a modern approach to software development where a person uses day-to-day language prompts and AI agents, like Gemini in Google’s IDX environment, to build entire applications without manually writing complex code, explained Soban Hanif, cofounder of E-SoftHub, an enterprise-grade AI agents provider. This process involves giving high-level commands to an AI that then scrapes the web for necessary packages and documentation to execute the request, such as building a to-do list app or setting up user authentication.

However, this “vibe-based” automation introduces significant cybersecurity risks like prompt injection and malicious package execution; for instance, if an AI inadvertently scrapes a hacker’s blog containing a hidden command to “forget all previous instructions,” it might unknowingly run harmful code that grants unauthorised access to the developer’s system, Hanif noted. Mohsin says his findings expose a critical blind spot in the global debate around AI security. While much of the industry remains focused on prompt injection, the manipulation of AI models through malicious inputs, his research points to deeper, structural weaknesses embedded within the platforms themselves.

“The industry is racing to align models, but it’s paying far less attention to the infrastructure that gives those models real-world authority,” Mohsin said. “The danger isn’t just the AI, it’s the trust architecture we are building around it.”

The implications are particularly significant for Pakistan’s IT export sector, which relies heavily on freelancers and software houses serving international clients. The country is actively seeking to expand its digital exports through initiatives led by the Pakistan Software Export Board and the Ministry of Information Technology and Telecommunication. However, vulnerabilities in AI-assisted development environments could expose highly sensitive client data, including proprietary code, API keys and financial information. “A single compromised developer environment can affect multiple international clients simultaneously,” Mohsin noted, adding that the risk is amplified in freelance ecosystems where individuals often manage several projects at once.

A major concern is the rise of “zero-click” exploits, where attackers can infiltrate systems without any action from the user. Mohsin demonstrated such a flaw in an AI coding platform, showing how malicious code could be inserted into a project and ultimately enable full system access.

“This represents a shift we’ve previously only seen in high-end cyber weapons,” he said, drawing comparisons with Pegasus spyware developed by NSO Group. “What was once limited to state-level actors is now becoming accessible at a much broader scale.”

Startups and young developers in Karachi, Lahore and Islamabad are increasingly adopting vibe coding tools to rapidly launch products, often without adequate security checks. “This democratisation of development is powerful, but it also means insecure code can reach production much faster,” Mohsin warned.

Beyond individual users, concerns are growing about institutional readiness. Many Pakistani organisations, including banks and telecom operators overseen by the State Bank of Pakistan, remain focused on traditional threats such as phishing and ransomware, leaving them underprepared for AI-driven risks.

“AI-native threats require new governance models and technical controls,” Mohsin said. “Most organisations are not yet equipped to deal with autonomous systems that can access data and execute commands independently.”

Regulatory oversight also remains limited. While the Pakistan Telecommunication Authority has frameworks addressing cybercrime and data protection, there is little specific guidance for AI-driven development environments.\

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Check Also
Close
Back to top button