After OpenAI's board fired CEO Sam Altman on November 17th, the future of the company behind ChatGPT was thrown into uncertainty. But just days later he returned as CEO in a much stronger position, and the debate over artificial intelligence regulation has grown even more heated.
Is Microsoft Becoming the Monopoly of Artificial Intelligence?
Behind the scenes, the struggle on OpenAI's board of directors reportedly reflected two opposing camps on how artificial intelligence should be governed, and it became clear that Altman's maximalist push to commercialize ChatGPT would now set the company's direction. Microsoft, OpenAI's largest investor, was involved in the management crisis at the highest level through its CEO Satya Nadella. That intervention illustrates Microsoft's growing influence over the company, both on the board of directors and among OpenAI's other investors.
Microsoft reportedly holds rights to 49% of the profits of OpenAI's for-profit arm and, following Altman's return, gained a non-voting observer seat on the newly formed board. In the wake of Microsoft's increasing sway over one of the AI revolution's leading companies, the US Federal Trade Commission and the UK Competition and Markets Authority have begun examining the Microsoft-OpenAI partnership for possible antitrust concerns.
However, based on the competition authorities' precedent-setting decisions, experts say that the situation created by the "Altman crisis" does not amount to a full-blown monopoly and that Microsoft will probably not face a formal antitrust investigation, at least "for now".
First Step from the EU in Artificial Intelligence Regulation
This month, the European Union (EU) took a historic step in the regulation of artificial intelligence. After long negotiations between the European Commission, the European Parliament and EU member states, agreement was reached on December 8, 2023 on the world's first comprehensive draft law regulating artificial intelligence. Under the new law, AI applications will be categorized by risk level and regulated according to those categories.
The biggest point of contention in the negotiations was the collection and analysis of biometric data, which was placed in the unacceptable-risk category. The new law bans biometric categorization systems for facial recognition by artificial intelligence and the creation of biometric databases from facial images scraped from the internet or security camera footage. European lawmakers set as their main goal a free and democratic "European way", defined by law, to ensure that European citizens can move freely in public spaces in the age of artificial intelligence. In particular, they emphasized the need to avoid resembling a Chinese-style surveillance-state model.
However, there will be exceptions for law enforcement, the military, intelligence services and institutions conducting legal investigations. These bodies may use AI technologies and biometric data to identify both victims and perpetrators in operations against terrorist and criminal organizations and in the fight against sexual crimes, human trafficking and armed crimes.
The new law also classifies as unacceptable risks, and prohibits, emotion recognition applications in workplaces and educational institutions, social scoring based on social behavior and personality traits, and AI applications designed to manipulate human behavior.
High-risk AI applications will need to fulfill various obligations and obtain approval before entering the markets of EU countries. Products such as autonomous vehicles, toys and medical equipment fall under this classification. EU officials, however, stress the need to strike a delicate balance so that AI companies are not deterred from entering the European market by the approval mechanism.
Generative AI applications such as ChatGPT, Copilot, Bard, DALL-E and Midjourney are categorized as limited-risk. These systems must clearly disclose to users in the European market that their output is generated by artificial intelligence. In addition, companies will have to be transparent about how illegal content is blocked, how the AI is trained, and which copyrighted works are used in training.
The details of the draft law will be worked out between the European Commission and the European Parliament in the coming period, and the AI law is planned to enter into force in 2026. France and Germany, home respectively to Mistral and Aleph Alpha, Europe's most prominent AI research companies, are expected to have a guiding influence on the regulatory work because of those companies' sectoral weight.
Last November, the "Bletchley Declaration", signed by 28 countries including the US, China and EU member states, set out an international work plan for AI safety and regulation. Although the Declaration aims at international cooperation rather than legislation, the follow-up "AI Safety Summit" meetings planned for South Korea and France in 2024 will also shape the direction AI regulation takes.
Tesla's Struggle with Labor Unions Continues
Electric car giant Tesla has been trying to increase its share of the European market in recent years. Its Gigafactory in Berlin-Brandenburg, opened last year, is one of the largest car production facilities in Europe, with more than 12,000 employees. However, poor working conditions at the factory and Tesla's refusal to enter into collective bargaining agreements with labor unions have caused great controversy. Tesla has promised to improve working conditions and has raised employee wages, but, contrary to the traditional employer-union relationship in the German automotive sector, it continues to refuse collective bargaining.
IG Metall, the largest labor union in Germany and Europe, has announced that it will accelerate organizing at Tesla's factories, stating that Tesla pays 20% less than the industry average. In Germany, where the automotive sector is the largest industry, Tesla wants to expand its investments in the coming years and become the biggest name in the country's sector. But its struggle with broad-based union organizing looks set to continue.
Tesla faces similar criticism in the Nordic countries. Although it has no production facility there, electric vehicle sales have long surpassed those of fossil fuel vehicles in Sweden, Norway, Denmark and Iceland, and Tesla remains the most popular electric vehicle brand in these countries.
But automotive workers from IF Metall, Sweden's largest industrial union, went on strike last month at workshops servicing Tesla vehicles because of Tesla's refusal to negotiate a collective agreement. The action in Sweden continues to grow, with sympathy strikes spreading to various sectors in Norway and Denmark. Beyond automotive workers, postal services, port workers and communications companies have decided to halt all Tesla-related transactions and deliveries.
Wealth and insurance funds, major players in the Nordic economies, have also called on Tesla to respect workers' rights, especially collective bargaining. The Norwegian sovereign wealth fund's position as Tesla's sixth-largest shareholder makes the standoff a particularly interesting case for labor relations in the transforming automotive industry, despite Tesla's market leadership in Northern Europe.