Going slow
Generative AI is getting better and better; so good that some speak of emergent properties, of language models building a world model. We are all asking ourselves how to solve the alignment problem: how do we ensure that AI works for rather than against humans, and that we can trust development to move in the right direction? This is not just a question of technical progress, but of the normative framework that must accompany it.
Trust is based on transparency, control and clear ethical standards. These principles are essential if a technology is to be established not only as progressive but also as trustworthy. With AI, we face the challenge that many systems function as "black boxes" whose inner workings and decision-making logic remain hidden. This opacity collides head-on with the need for traceability and integrity.
The call for regulation of AI is a call for orientation and security. Regulation can be seen as a signpost that does not limit innovation but guides it along ethically responsible paths. The task is to find a balance that both drives development forward and sets ethical standards. Bad guys will never take regulation seriously, and it would be counterproductive if the good guys slowed down development while the bad guys pushed the boundaries of what is possible without restraint. Regulation can therefore set the standards, but it cannot solve the whole problem.
Beyond the normative framework, AI needs transparency and control over data. Even if LLAMA, for example, is not as powerful as GPT4, it is open source and can therefore at least be inspected transparently. And blockchain technologies open up new perspectives for transparency, control and decentralisation of data and models. They represent a synthesis of technological innovation and ethical responsibility that paves the way for trustworthy AI. We ourselves decide whether to prioritise the good guys or the bad guys.