The global rush to embrace artificial intelligence (AI) has moved beyond fascination with high-tech applications. It is now firmly interwoven with critical issues such as water use, energy consumption, and environmental sustainability. Countries like China and Indonesia are leading the charge to impose regulations aimed at curbing the more energy-hungry and addictive applications of AI.
In December 2025, China’s cyber regulatory authority released draft rules to govern AI systems designed to mimic human personas and foster emotional connections. The sweeping proposal not only calls for disclaimers about excessive use but also requires service providers to identify signs of user addiction and intervene when negative emotional states are detected. Such proactive measures signal a shift toward more responsible AI deployment focused on user well-being.
The draft also emphasizes algorithm reviews, robust data protection protocols, and comprehensive content limits, prohibiting material deemed harmful to national security or that spreads violence, rumors, or pornography. This holistic approach underscores the importance of ethical considerations in AI’s development and deployment.
Indonesia is charting a related, yet distinct course concerning AI governance. The government is in the process of finalizing a presidential regulation that outlines a national AI roadmap, incorporating ethical guidelines adaptable across various sectors, including healthcare and finance. Deputy Minister Nezar Patria has articulated that sustainability will serve as a cornerstone, highlighting that “AI must be developed with consideration for its impact on humans, the environment, and all living creatures.” This perspective is vital as the nation looks to integrate AI responsibly into its socio-economic fabric.
But why should emotional chatbots and environmental concerns be linked at all? The escalating AI boom relies on a vast infrastructure of data centers that consume staggering amounts of electricity and water. The International Energy Agency (IEA) estimates that data centers currently emit around 180 million tons of CO2 annually, and their electricity demand could more than double by 2030 if consumption trends continue unchecked. Although AI workloads account for only a fraction of that total today, their growth trajectory is troubling.
Research led by Alex de Vries-Gao paints an alarming picture: AI systems could soon generate a carbon footprint comparable to that of New York City and use as much water as the world’s entire annual bottled-water consumption. A United Nations-supported analysis, meanwhile, forecasts that global AI demand could consume 4.2 to 6.6 billion cubic meters of water by 2027, roughly on par with Denmark’s annual water withdrawals.
To put those numbers in perspective, a single medium-sized data center can consume as much water in a year as roughly 1,000 households, and larger facilities can rival the needs of small cities. As society increasingly turns to AI for entertainment, work, and information, the cooling requirements of these server farms add up, often drawing on freshwater that could otherwise supply agriculture or households facing seasonal droughts.
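For a rough sense of scale, the short sketch below converts the UN-backed projection quoted above into household-equivalents. The per-household figure of 150 cubic meters per year is an illustrative assumption, not a number from the article or the underlying reports.

```python
# Back-of-envelope conversion of projected AI water demand for 2027
# (4.2 to 6.6 billion cubic meters, per the UN-supported analysis cited above)
# into household-equivalents. The per-household figure is an assumption
# chosen only to illustrate the order of magnitude.

AI_WATER_LOW_M3 = 4.2e9    # lower bound of projected annual AI water demand (m^3)
AI_WATER_HIGH_M3 = 6.6e9   # upper bound (m^3)
HOUSEHOLD_WATER_M3 = 150   # assumed annual water use of a single household (m^3)

low_equiv = AI_WATER_LOW_M3 / HOUSEHOLD_WATER_M3
high_equiv = AI_WATER_HIGH_M3 / HOUSEHOLD_WATER_M3

print(f"Roughly {low_equiv / 1e6:.0f} to {high_equiv / 1e6:.0f} "
      "million household-equivalents of water per year")
```

Even under these rough assumptions, the projected demand works out to tens of millions of household-equivalents, which is consistent with the Denmark comparison above.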
The consequences extend beyond immediate resource use. The surge in AI is also exacerbating a global shortage of memory chips: as manufacturers rush to produce the high-bandwidth memory needed for AI servers, they divert capacity away from other electronics, including smartphones and laptops. Industry analysts predict that this AI-driven strain will raise prices for consumer electronics and could persist well into 2027, hitting everyday consumers.
For many people, the environmental impact of AI may first show up as a higher price tag on their next device, and later as a growing pile of electronic waste when obsolete hardware is discarded sooner than expected. This cycle of consumption raises serious questions about sustainability in technology.
In light of these challenges, Indonesia’s focus on ensuring that technology does not lead to human “enslavement” offers an essential ethical framework. The Indonesian AI roadmap aims to guide deployments in key sectors such as healthcare, education, and smart city initiatives, insisting on standards for accountability, transparency, and respect for intellectual property rights. If it succeeds, AI could help farmers adapt to climate variability or help urban planners cut emissions, rather than simply fostering passive screen time.
Conversely, China’s draft regulations target the risks of emotional companion apps, which are increasingly available and inviting, particularly late at night. Regulators worry that dependency on these apps could harm users’ mental health and steer them toward detrimental behaviors. The proposed rules would require providers to monitor user emotions, flag risky behavior, and avoid manipulative designs that encourage compulsive engagement.
Together, these emerging policies reflect a growing understanding that AI is not purely a virtual or abstract technology; it is fundamentally a tangible industry that exerts pressure on power grids, water supplies, and finite resources, while increasingly competing for human attention. Most experts suggest that robust regulations, enhanced transparency, and explicit environmental goals are essential for ensuring that AI contributes to climate solutions rather than exacerbating existing problems.
The underlying question remains straightforward: Do we want AI systems that exhaust resources and inflate prices, or technology that promotes efficiency, environmental protection, and human oversight? Countries like China and Indonesia are building proactive legal frameworks to answer it.
The study was published in Patterns.