AI can be dangerous but not in the way people think

The honest answer is yes, AI can be dangerous. But the more useful answer is that it is not dangerous in one single, cinematic way. It is dangerous in uneven, context-dependent ways: through fraud, manipulation, bias, privacy violations, unsafe decisions, brittle automation, and systems that scale errors faster than humans can catch them. At the same time, most AI systems are not treated by regulators as inherently catastrophic. The European Commission’s own explanation of the AI Act says most systems pose minimal or no risk, while certain uses create risks serious enough to justify bans or strict obligations. UNESCO frames the same issue in ethical terms, warning that AI can embed bias, threaten human rights, and deepen existing inequalities if it is deployed without strong safeguards.

That is why the question “Is AI dangerous?” is slightly too blunt. A spam filter, a coding assistant, an image generator, a hiring model, a surgical aid, a police surveillance system, and a persuasive chatbot should not be discussed as if they belong in the same moral category. The real issue is how much power the system has, how opaque it is, how widely it is deployed, and what happens when it gets something wrong. The NIST generative AI risk profile makes exactly this point in more technical language, noting that generative AI can intensify existing AI risks, create new ones, and generate harms not only at the model level but at the ecosystem level, including effects on labor markets and creative industries. It also stresses that many risks arise from human behavior, misuse, and unsafe repurposing, not just from the model itself.

The danger depends on what the system is allowed to do

One reason public debate gets confused is that “AI” is used as a catch-all term for tools with radically different stakes. European law now reflects that difference explicitly. The AI Act uses a risk-based model, reserving the toughest treatment for systems that threaten safety, livelihoods, or fundamental rights. The Commission lists prohibited practices such as harmful manipulation and deception, social scoring, certain biometric practices, and real-time remote biometric identification for law enforcement in public spaces. It also identifies high-risk uses in areas like education, employment, critical infrastructure, essential services, and safety components in products such as robot-assisted surgery.
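
To make that tiered logic concrete, here is a toy sketch in Python. The tier names echo the Commission’s summary, but the example use cases and the lookup itself are purely illustrative, not the Act’s legal taxonomy:

    # Toy illustration of risk-tiered regulation: obligations attach to the
    # use case, not to "AI" in general. Mappings are illustrative only.
    RISK_TIERS = {
        "prohibited": {"social scoring", "harmful manipulation"},
        "high_risk": {"hiring screening", "exam scoring", "surgical robotics"},
        "limited": {"general-purpose chatbot"},  # transparency duties apply
        "minimal": {"spam filter", "music recommender"},
    }

    def tier_for(use_case: str) -> str:
        """Return the regulatory tier for a given use case."""
        for tier, examples in RISK_TIERS.items():
            if use_case in examples:
                return tier
        return "minimal"  # default in this toy model

    print(tier_for("hiring screening"))   # high_risk
    print(tier_for("music recommender"))  # minimal

The point of the structure is that the same underlying technology can land in different tiers depending entirely on what it is used for.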

That distinction matters because it cuts through both lazy optimism and lazy panic. AI is not one thing, and its danger is not one danger. A model used to recommend songs may be annoying when it fails. A model used to screen job candidates or shape credit decisions can quietly distort life chances. The Commission notes that AI decisions can be difficult to explain, making it harder to determine whether someone was unfairly disadvantaged. That opacity is not a science-fiction problem. It is an institutional one.

The harms that are already here

The strongest case for taking AI risk seriously does not rest on speculative doomsday arguments. It rests on harms that are already visible. The OECD says media-reported AI incidents have increased steeply since November 2022, and an accompanying working paper groups reported harms into themes including synthetic media, child safety, cyberattacks, privacy, and health. That does not prove every incident is equally severe, but it does show that AI risk is not hypothetical noise created by philosophers and headline writers. It is a growing operational reality.

Fraud and impersonation are among the clearest examples. The U.S. Federal Trade Commission warns that voice cloning makes scam calls more believable because people are more likely to act when a caller sounds like a family member, colleague, or executive. That is a perfect example of what AI changes most effectively: it lowers cost, raises scale, and makes deception easier to personalize. The danger is not that a model “wants” to trick anyone. The danger is that it gives bad actors a cheaper and more convincing tool.

Disinformation and reputational abuse are also very real. The International Scientific Report on the Safety of Advanced AI says general-purpose AI makes it possible to generate and disseminate disinformation at an unprecedented scale and with increasing sophistication. It also highlights deepfake abuse, including non-consensual sexual content and blackmail, and notes that watermarking and similar countermeasures can often be circumvented by moderately sophisticated actors. The same report is careful not to overclaim: it says the overall impact of disinformation campaigns remains hard to measure, and distribution may still be a bigger bottleneck than content generation itself. That nuance is important. The danger is real even when the exact scale of impact is still being debated.

Bias, privacy, and unreliable outputs remain central risks as well. NIST’s generative AI profile repeatedly flags harmful bias, confabulation, information integrity, data privacy, information security, and dangerous or hateful content as core risk categories for generative systems. UNESCO likewise places fairness, transparency, privacy, accountability, human oversight, and non-discrimination at the center of its ethics framework. These are not peripheral concerns. They are the basic reasons a system that looks fluent can still be unsafe.

Some fears are justified and others are still unsettled

Public argument about AI becomes less useful when every risk is treated as equally proven. The evidence does not support that. The international safety report backed by the UK’s AI Safety Institute draws a careful line between current harms and more speculative ones. On cyber risks, it says general-purpose AI could lower the barrier for malicious users and help automate or scale some offensive activity, including social engineering. But it also says there is no substantial evidence yet that current systems can automate sophisticated cybersecurity tasks in a way that clearly tips the balance toward attackers.

The same report takes a similarly measured view on biological misuse. It says current general-purpose AI systems do not present a clear current biological threat, and that the limited studies available do not show clear evidence that today’s systems uplift malicious actors beyond what they can already do with existing internet resources. Future risks remain uncertain, especially if general-purpose models become more capable and are integrated with specialized biological tools and automated labs, but the report is explicit that current evidence is limited.

The most dramatic long-term fear, loss of human control over advanced AI agents, is treated even more cautiously. The report says there is broad agreement among experts that currently known general-purpose AI systems pose no significant loss-of-control risk because their capabilities are still limited. It also says expert views diverge sharply on whether such scenarios are implausible, likely, or low-probability but high-severity risks worth preparing for. That is a better basis for judgment than either complacency or apocalyptic certainty. Some of the loudest fears are not baseless, but they are not established facts either.

What makes AI risky is scale, opacity, and overreliance

The deepest risk is not merely that AI makes mistakes. Humans make mistakes constantly. The deeper problem is that AI can make mistakes at scale, in opaque systems, with institutional legitimacy attached to the output. That is why NIST warns about “algorithmic monocultures,” where repeated reliance on the same models in consequential settings can increase correlated failures across sectors like employment and lending. A single flawed pattern can travel further and faster when thousands of organizations rely on similar systems.
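
A minimal simulation, with entirely invented numbers, shows why monocultures matter. The per-decision error rate is identical in both scenarios; what changes is whether errors are correlated across organizations or independent:

    import numpy as np

    rng = np.random.default_rng(0)
    n_applicants, n_orgs, error_rate = 10_000, 50, 0.05

    # Monoculture: every organization reuses the same model, so the
    # same ~5% of applicants are misjudged everywhere they apply.
    shared_errors = rng.random(n_applicants) < error_rate
    monoculture_hits = np.where(shared_errors, n_orgs, 0)

    # Diversity: each organization runs its own model, so errors are
    # independent and rarely stack up on the same person.
    independent_errors = rng.random((n_orgs, n_applicants)) < error_rate
    diverse_hits = independent_errors.sum(axis=0)

    # Applicants wrongly flagged by at least 10 of the 50 organizations:
    print("monoculture:", (monoculture_hits >= 10).sum())  # ~500 people
    print("diverse:    ", (diverse_hits >= 10).sum())      # a handful at most

Neither scenario makes fewer mistakes overall, but in the monoculture every error is systemic: the same people are shut out everywhere at once.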

Overreliance makes this worse. The international safety report notes that if people increasingly entrust general-purpose AI systems with critical responsibilities, oversight becomes harder and risks can grow, especially in government, military, or judicial uses. The European Commission makes a closely related point in plainer language: it is often difficult to understand why an AI system reached a decision, and that makes it harder to contest unfair outcomes. Dangerous AI is not always the AI that breaks loudly. Sometimes it is the AI that slides into routine authority before anyone has built a serious system for challenge and redress.

There is also an environmental version of this problem. AI is often discussed as if its risks were only social or political, but physical infrastructure matters too. The International Energy Agency says data centres accounted for about 415 TWh of electricity consumption in 2024 and projects that demand will rise to roughly 945 TWh by 2030, with AI as the most important driver of that growth alongside other digital services. UNEP has separately argued that the full environmental impact of AI across its lifecycle needs comprehensive assessment. That does not make AI uniquely monstrous, but it does mean that “danger” includes resource strain, emissions pathways, and local infrastructure pressure, not just digital harms on screens.
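
For a sense of scale, the implied growth rate is easy to back out of those two figures. This is a back-of-envelope calculation assuming smooth compound growth, not the IEA’s projection method:

    # Implied compound annual growth rate of data-centre electricity demand,
    # using the IEA figures cited above: ~415 TWh (2024) to ~945 TWh (2030).
    start_twh, end_twh, years = 415, 945, 2030 - 2024
    cagr = (end_twh / start_twh) ** (1 / years) - 1
    print(f"{cagr:.1%} per year")  # roughly 14.7% annual growth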

The right response is not panic but hard limits

The smartest response to AI risk is neither denial nor melodrama. It is governance with teeth. UNESCO’s recommendation is built around human rights, transparency, accountability, privacy, fairness, safety, and human oversight. WHO’s health guidance argues that AI in medicine and public health must place ethics and human rights at the heart of design, deployment, and use. NIST’s framework is built around trustworthiness and risk management across the lifecycle. The European Union has translated that broad logic into a tiered legal model that bans some uses, heavily regulates others, and leaves low-risk uses relatively free.

That mix is more credible than sweeping slogans about either saving the world or ending it. AI becomes dangerous when capability outruns accountability. The systems that deserve the most scrutiny are not always the flashiest ones. They are the ones making decisions about jobs, care, identity, safety, public information, and civic trust. That is also why the best argument against panic is not blind faith in innovation. It is visible safeguards, auditable systems, strong reporting, meaningful human oversight, and clear limits on uses that should never have been normalized in the first place.

AI is dangerous, then, but not because every model is a lurking superintelligence. It is dangerous because powerful tools in weak systems usually are. The real test is whether societies are disciplined enough to distinguish convenience from legitimacy, automation from wisdom, and capability from permission.

Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency


Sources

Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
NIST’s official framework companion on the risks unique to or intensified by generative AI.
https://www.nist.gov/publications/artificial-intelligence-risk-management-framework-generative-artificial-intelligence

AI risks and incidents
OECD overview of AI incidents, risk monitoring, and the need for shared reporting frameworks.
https://www.oecd.org/en/topics/ai-risks-and-incidents.html

Trends in AI incidents and hazards reported by the media
OECD working paper analyzing patterns in reported AI harms across themes such as synthetic media, privacy, cyberattacks, and health.
https://www.oecd.org/en/publications/trends-in-ai-incidents-and-hazards-reported-by-the-media_4f5ff43c-en.html

International scientific report on the safety of advanced AI
Government-backed synthesis of current research on advanced AI capabilities, current risks, and areas of scientific disagreement.
https://www.gov.uk/government/publications/international-scientific-report-on-the-safety-of-advanced-ai/international-scientific-report-on-the-safety-of-advanced-ai-interim-report

AI Act
European Commission explanation of the AI Act’s risk-based model, prohibited practices, and high-risk categories.
https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Fighting back against harmful voice cloning
FTC consumer guidance on AI-enabled voice cloning scams and why they are persuasive.
https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning

Ethics of Artificial Intelligence
UNESCO’s official ethics framework emphasizing human rights, fairness, transparency, accountability, and human oversight.
https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

Ethics and governance of artificial intelligence for health
WHO guidance on ethical and human-rights-based deployment of AI in health systems.
https://www.who.int/publications/i/item/9789240029200

Guidance on large multi-modal models in health
WHO publication focused on the implications of generative AI and multi-modal models in health care and research.
https://www.who.int/publications/i/item/9789240084759

Energy and AI executive summary
International Energy Agency analysis of AI-driven data-centre electricity demand and energy-system implications.
https://www.iea.org/reports/energy-and-ai/executive-summary

Artificial Intelligence (AI) end-to-end: The Environmental Impact of the Full AI Lifecycle Needs to be Comprehensively Assessed
UNEP issue note on assessing AI’s environmental footprint across its lifecycle.
https://www.unep.org/resources/report/artificial-intelligence-ai-end-end-environmental-impact-full-ai-lifecycle-needs-be