Introduction
Artificial intelligence represents one of the most significant innovations since the advent of the first iPhone.
 
Since neural networks gained popularity, tools based on this technology have been the subject of intense advertising and hype, growing at an unprecedented rate. With the spread of these tools, often released to the public almost free of charge, the first consequences have also emerged. 
Fears are emerging of a slow erosion of human creativity and originality and, alongside it, of a disruption of social interaction, driven by the growing popularity of increasingly sophisticated and customizable chatbots and the flood of coverage in local and online newspapers. The effects of AI on people have become a central topic in the media, generating widespread concern.
However, these may not be the worst problems AI poses. In a hypothetical scenario called "AI 2027", written by Daniel Kokotajlo, Eli Lifland, Thomas Larsen, and Romeo Dean, which has so far tracked our reality closely, artificial intelligence is predicted to reach the stage of AGI (Artificial General Intelligence) by 2027.
This would allow it to surpass the human race in every intellectual field and skill, even reaching the point of being involved in government decisions.
Many believe this will not happen, citing the limitations of new "SOTA" (state-of-the-art) models like GPT-5 to argue that current AI is not as intelligent and capable as marketing and benchmarks would suggest. The underlying problem, however, is a different one: GPT-5 still relies on the Transformer architecture (the "T" in Generative Pre-trained Transformer). We cannot know when, or if, a radically new technology, a different "engine," will be developed that could give AI significantly superior understanding and reasoning capabilities.
AI is NOT evil.
Artificial Intelligence is only a tool; by itself it is not dangerous. It is the way people use it that makes it dangerous. That is one of the most simplified yet powerful framings in all discussions about AI.
Just as a knife can help prepare a tasty meal, it can also injure or end a living being's life.
The existence of Artificial Intelligence is not in itself a bad thing; it is actually a good tool for improving everyday life and society, for example by implementing prediction algorithms and Large Language Models that allow more natural interaction with everyday devices.
It also allows developers to create new kinds of content and to simplify tasks that scripts and programs previously needed hours for, such as interpreting free-form text input as commands from the user.
The techniques behind image generation can lead data scientists and operating system developers toward new ways of compressing and storing files, and prediction models can help researchers in any field of science (such as medicine or geology) by running complex analyses that improve their work.
The threat to us
The actual threat to humanity from Artificial Intelligence is its growth. Currently, AI development is a worldwide competition in which whoever develops the smartest AI "wins" over the other companies' investors, and therefore makes more money. The issue is that Artificial Intelligence might become too heavily relied upon, too power-hungry, too intelligent; and if what companies are promising, namely AGI, actually happens, AI could become sentient and work without human input.
This means the machine would think for itself and make its own decisions, and so, if instructed in the wrong way, it could become the ultimate multitool for doing anything. If instructed to kill, an AGI would not hesitate at anything.
Not only that, but we might come to see AI and AGI as a whole new species, as the robotics industry is also making impressive progress, with companies like Unitree and Figure showcasing and even selling accessible, capable robots.
With AGI implemented in robots, we might see a new, stronger, smarter, more capable species that could surpass us and even, in the worst case, replace us.
Another threat coming from AI's growth is Generative Artificial Intelligence, more specifically video and image generation.
The seamless, simple usage of these tools makes replacing professionals with cheap but powerful AI an easy, unthinking choice. Video makers and even film production companies are at risk because of how quickly AI is becoming more precise and accurate.
Social media platforms are already filled with this kind of slop video and imagery, with Facebook nearly witnessing the death of original content.
Artists and publishing houses have taken this problem very seriously, protesting and filing lawsuits against AI companies for infringing on copyright laws by training their models on copyrighted images without permission. However, none of this has been enough, just as the lawsuits from book authors and publishers who have sued corporations like OpenAI for using their books and texts to train ChatGPT have not been enough.
Another issue with generative AI is how easy it has become to create deepfakes: images, videos, or audio of real people saying or doing whatever the user prompts the machine to produce, spreading disinformation and making scams easier than ever. It is horrifying and disgusting even to think of a clueless mother sending thousands of dollars to scammers because her "child has sent her a video of themselves being held by criminals", or at least that is what the AI-generated video wants her to think.
Weak points in AI
Right now, the weakest point in all of AI development and research is, in the end, budget.
As we saw in ChatGPT's early days, when the servers were constantly overloaded, or with Google cutting availability within the free tier of Gemini's API, cost is no joke when talking about AI development. Neural networks are extremely complex algorithms that consume enormous amounts of power, both for processing and for cooling the servers and datacenters.
Companies like Anthropic, Google, OpenAI, and Meta are spending billions of dollars on electricity and datacenter construction because of these neural networks. Sam Altman, the CEO and co-founder of OpenAI, has publicly said he is willing to invest even trillions of dollars to build newer versions of ChatGPT, while Meta has already created a specialized team to build superintelligence and is clearing vast tracts of land for datacenter and server infrastructure.
Where are all these companies taking money from? Investors, loans, advertising.
What attracts investors, and makes good advertising? Shiny new products.
If companies cannot sustain everyone's demand, servers crash, making AI feel less shiny and more like a frustration. What makes AI even less shiny and attractive? Less precise and less intelligent models.
How do we make that happen?
There are several things people are already doing. Artists are using tools to "poison" their images so that crawlers and AI training pipelines pick up corrupted data when training on them.
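Real poisoning tools like Glaze and Nightshade compute targeted perturbations against a model's feature extractor; as a toy illustration only (the function name and the plain-noise approach here are my own invention, not any tool's actual algorithm), the basic idea is to nudge pixel values by amounts too small for a human to notice, so the image looks unchanged while the data a model trains on is no longer the original:

```python
import numpy as np

def poison_image(pixels: np.ndarray, strength: float = 4.0, seed: int = 0) -> np.ndarray:
    """Add a small pseudo-random perturbation to an image.

    `pixels` is an HxWx3 uint8 array. The change stays within
    +/- `strength` intensity levels, nearly invisible to humans.
    (Real tools compute targeted adversarial perturbations; this
    toy sketch just uses uniform noise to show the principle.)
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-strength, strength, size=pixels.shape)
    poisoned = np.clip(pixels.astype(np.float64) + noise, 0, 255)
    return poisoned.astype(np.uint8)

# A fake 64x64 flat-gray "artwork": after poisoning, every pixel has
# moved by at most 4 levels, so a person sees the same picture.
art = np.full((64, 64, 3), 128, dtype=np.uint8)
poisoned = poison_image(art)
max_change = int(np.abs(poisoned.astype(int) - art.astype(int)).max())
print(max_change)  # at most 4
```

Targeted tools go further: instead of random noise, they pick the perturbation that most confuses a model's internal features, which is why their effect on training is far stronger than this sketch suggests.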
What can you do?
One of the many ways to proactively slow down AI development is by wasting corporate money.
Here's a table of methods you can pick from to slow down AI development:
Google Docs
These are methods crafted by me (unless specified otherwise), but you can always submit your own ideas to stop AI development by sending them to:
[email protected]
Thank you for reading this. These are my honest thoughts on Artificial Intelligence, and this piece is not here to influence you but to inform you.
I wish all the best to every human living on Earth, and I hope for a better world, where kids don't have to watch parts of humanity consumed by corporate greed every single day. I want to make the world a good place for my children, if I ever have any, to live in.
Please take care of yourself,
- Uzif, colloquially Chroma.