AI adoption in developing economies is masking rising costs and the growth of informal labour.

In rapidly developing markets, the uptake of artificial intelligence is quietly driving financial strain and operational disruption, yet these challenges remain largely absent from mainstream managerial discourse. This silence persists even as governments continue to promote specialised, technology-led economic modernisation frameworks at national and international levels. While many business leaders champion AI with great optimism, expecting it to drive regional prosperity, tangible improvements in overall economic outcomes remain elusive in the near term.
According to data from information technology advisory firms, cloud computing costs have risen by 25 to 70 per cent, driven by the relentless operation of energy-intensive AI models. These models frequently deliver negligible returns on investment, with performance benchmarks indicating that even high-volume outputs offer little value relative to the human work they are intended to replace.
A notable performance gap has emerged, giving rise to an informal labour segment. To mitigate algorithmic errors, companies increasingly rely on manual rectification, which substantially inflates operational costs. This gap warrants close attention, as numerous firms have reported the presence of informal “model custodians” who quietly correct AI output. In parallel, a separate category of experienced data annotators, compensated in line with their efficiency and skill, works to reduce or correct algorithmic mistakes.
The manual correction of errors, however, introduces its own complexities. Colloquial, dialect-based, and broader linguistic inaccuracies, which are particularly prevalent in multilingual textual materials, can intrude during the rectification process. These issues must be addressed with precision; otherwise, even corrected algorithms may continue to produce inaccurate results, undermining the reliability of AI systems in linguistically diverse environments.
Privacy, meanwhile, has become a myth in data transfer. Moving data from one AI platform to another through various stakeholders, under the guise of editing, correction, grammar checking, dialect verification, and similar processes, obscures the original sources of generation and integration. Passing data from one person to another, and from one AI system to another, means it may be handled by multiple parties whose reliability is uncertain. This major concern remains inadequately addressed.
Regulations and legal systems have been unable to keep pace with rapid technological development. Ethical concerns related to AI have been either half-addressed or entirely neglected, failing to coalesce into meaningful initiatives. Legally binding frameworks, where they are developed at all, risk becoming obsolete almost immediately upon implementation.
All sectors – whether product or service industries, education, healthcare, or aviation – face similar difficulties when adopting AI. Ethical frameworks have yet to be reengineered to ensure the effective utilisation of AI-generated material and the proper acknowledgment of its exact sources.
One organisation recently reported successfully replacing a substantial number of customer support employees with AI agents. Initially, it observed a reduction in operational costs. That reduction proved short-lived, however, disappearing faster than it had been achieved. Responsibility for providing incorrect information could not be assigned to any individual, as AI cannot be held accountable for what it delivers to customers. Misleading algorithms and mismatched information led to significant customer disappointment, and the further rectification required doubled operational costs. Once a customer is lost, that loss is often permanent, creating lasting financial burdens.
For many organisations, persistent reliability concerns outweigh the cost savings, and outdated legal frameworks may create more contingencies than anticipated.
Irreplaceable human intelligence, creativity, and insight must be nurtured, with AI incorporated as an additional tool within human-augmented systems. This principle can never be compromised, regardless of how advanced algorithmic models become.
Dr Ansa Savad Salim
Assistant Professor, UoB