ai - legal insight

#31 Power, Platforms, and Algorithms: What the Digital Services Act Means for Europe’s AI Future

The Digital Services Act (DSA) is one of the most significant regulations to emerge from the European Union’s digital strategy in recent years. This legislation redefines how online platforms must operate, with a strong focus on accountability, transparency, and user protection. But beyond the buzzwords, the DSA carries real implications for digital services and the development of artificial intelligence (AI). This article aims to offer a clear and structured overview for young professionals and AI enthusiasts alike. It explores the origins of the DSA, its scope of application, its key enforcement mechanisms, and why it is central to the future of ethical tech governance in Europe.

A Brief History of the DSA

The DSA is a central component of the European Union’s Digital Services Package. It was proposed by the European Commission in December 2020, adopted in October 2022, and became fully enforceable as of February 17, 2024. Its purpose is to modernise a regulatory framework that had rested since 2000 on the e-Commerce Directive, a piece of legislation that predated the rise of social media, online marketplaces, and algorithmic decision-making.

As the digital landscape evolved rapidly, the limitations of the old rules became apparent. The directive was based on the assumption that intermediaries were largely passive conduits of information. However, platforms today actively curate content, use recommender systems, and monetise attention through advertising. This transformation demanded new legal safeguards.

The DSA emerged not in isolation but in response to a series of public and political concerns, including the spread of disinformation, the sale of illegal goods, online harassment, and the lack of meaningful recourse for users harmed by platform decisions. The Cambridge Analytica scandal, increased scrutiny of big tech’s market power, and misinformation during the COVID-19 pandemic all contributed to a political climate that demanded stricter oversight.

Scope and Applicability

The DSA applies across all EU member states and affects a wide range of digital services, including internet intermediaries, hosting providers, and online platforms. The obligations it imposes are tiered depending on the size and societal impact of the service provider.

Intermediary services, such as internet access providers and domain registrars, must fulfil basic requirements like cooperating with authorities. Hosting providers are obligated to act on notices of illegal content and provide user-friendly complaint procedures. Online platforms, which include social networks and marketplaces, face enhanced obligations. They must publish transparency reports, offer dispute resolution systems, and verify the identities of professional users.

Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), defined as services with 45 million or more average monthly active users in the EU (roughly ten percent of the Union’s population), are subject to the strictest set of rules. These include regular risk assessments, mandatory external audits, and a duty to share data with vetted researchers.
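For readers who like to think in code, this tiering can be pictured as a simple classification rule. The sketch below is illustrative only, not legal advice: the tier labels and the 45-million threshold follow the DSA’s broad structure, while the function and all identifiers are hypothetical.

```python
# Illustrative sketch of the DSA's tiered obligations; not legal advice.
# The tier labels and the 45 million threshold follow the DSA's broad
# structure, but the function and identifiers are hypothetical.

VLOP_THRESHOLD = 45_000_000  # average monthly active users in the EU

def dsa_tier(service_type: str, monthly_active_eu_users: int) -> str:
    """Return a simplified DSA obligation tier for a service."""
    if service_type == "online_platform" and monthly_active_eu_users >= VLOP_THRESHOLD:
        return "VLOP/VLOSE: risk assessments, external audits, researcher data access"
    if service_type == "online_platform":
        return "Platform: transparency reports, dispute resolution, trader verification"
    if service_type == "hosting":
        return "Hosting: notice-and-action duties, complaint procedures"
    return "Intermediary: basic duties such as cooperation with authorities"

print(dsa_tier("online_platform", 120_000_000))
```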

In practical terms, this means companies like Meta (Facebook and Instagram), Google, Amazon, and TikTok face extensive obligations under the DSA. Their business models often rely heavily on algorithmic targeting and content moderation at scale, areas the DSA now directly regulates.

The DSA also applies to providers established outside the EU if their services are offered to users within the EU. This extraterritorial scope means that European standards reach any provider targeting the EU market, setting a high bar for compliance. It mirrors the global impact of the GDPR, which became a de facto global standard for data protection.

Relevant provisions include Article 3, which defines the “recipient of the service” and when a provider is deemed to offer services in the Union; Article 13, which requires providers established outside the EU to designate a legal representative in the Union; Article 14, which requires accessible and clear terms of service; and Article 36, which establishes a crisis response mechanism for the largest platforms in times of serious threat to public security or public health.

Who Enforces the DSA—and What Happens If You Don’t Comply

Enforcement of the DSA is split between national and EU-level authorities. Each member state must appoint a Digital Services Coordinator responsible for overseeing compliance within its jurisdiction. These authorities coordinate investigations, respond to user complaints, and may issue legally binding decisions against service providers operating in their territories.

For the largest platforms and search engines, the European Commission assumes a supervisory role. This centralisation ensures a consistent regulatory approach across the EU and prevents forum shopping by large tech firms looking for more lenient jurisdictions.

Failure to comply with the DSA can result in significant financial penalties. Companies may be fined up to six percent of their global annual turnover. In serious cases, authorities can impose operational restrictions or require urgent remedial actions, such as removing a service or functionality until it complies with EU law.
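To make that ceiling concrete, here is a minimal worked example; the six percent cap comes from the regulation, while the turnover figure is invented.

```python
# Hypothetical illustration of the DSA's fine ceiling. The 6 percent cap
# comes from the regulation; the turnover figure below is invented.
DSA_FINE_CAP = 0.06

annual_worldwide_turnover_eur = 100_000_000_000  # invented: EUR 100 billion
max_fine_eur = DSA_FINE_CAP * annual_worldwide_turnover_eur
print(f"Maximum possible fine: EUR {max_fine_eur:,.0f}")  # EUR 6,000,000,000
```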

The law also includes provisions for transparency and oversight. Platforms must provide access to internal data to regulators and submit to independent audits. These mechanisms ensure that authorities are not dependent on voluntary disclosures or unverifiable self-reporting.

Importantly, the DSA establishes a formal status for “trusted flaggers”: entities granted recognition by Digital Services Coordinators on account of their proven expertise and track record in identifying illegal content. Platforms must process notices submitted by trusted flaggers with priority, enabling more effective enforcement against hate speech, fraud, and other illicit content.
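In engineering terms, this priority duty can be sketched as a two-level moderation queue. The example below is a hypothetical sketch, not a description of any real platform’s pipeline.

```python
# Minimal sketch of priority handling for trusted-flagger notices using a
# two-level queue. All names and structures are hypothetical.
import heapq
from dataclasses import dataclass, field
from itertools import count

_sequence = count()  # tie-breaker keeps equal-priority notices in FIFO order

@dataclass(order=True)
class Notice:
    priority: int  # 0 = trusted flagger, 1 = ordinary notice
    seq: int
    content_id: str = field(compare=False, default="")

queue: list[Notice] = []

def submit(content_id: str, from_trusted_flagger: bool) -> None:
    """Enqueue a notice; trusted-flagger notices jump ahead of ordinary ones."""
    priority = 0 if from_trusted_flagger else 1
    heapq.heappush(queue, Notice(priority, next(_sequence), content_id))

submit("post-123", from_trusted_flagger=False)
submit("post-456", from_trusted_flagger=True)
print(heapq.heappop(queue).content_id)  # post-456 is reviewed first
```

The tie-breaking counter matters: ordinary notices are still handled in the order they arrive, so prioritisation does not turn into starvation.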

The DSA in Context: Why It Matters

The DSA is not a standalone regulation. It is part of a broader legislative framework that also includes the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA), and the upcoming Artificial Intelligence Act. Each of these instruments addresses a specific dimension of the digital economy. The GDPR focuses on data privacy. The DMA tackles market dominance and anti-competitive practices. The AI Act aims to regulate the use of AI systems based on risk.

The DSA’s unique role lies in its governance of digital spaces themselves. It ensures that platforms are accountable for the content they host, the systems they use to recommend or remove content, and the way they respond to crises. In doing so, it bridges the gap between platform responsibility, user rights, and democratic safeguards.

This matters because online platforms are not just neutral spaces. They influence public discourse, affect mental health, and play a growing role in shaping democratic participation. By setting clear rules for how these spaces are managed, the DSA contributes to a safer, fairer, and more transparent digital environment.

It also strengthens the position of users. For example, users now have the right to contest content moderation decisions through an internal complaint-handling system and, if necessary, via out-of-court dispute resolution bodies. Terms and conditions must be explained in clear and comprehensible language. This shift toward accessibility and fairness reflects broader EU efforts to put users—not just companies—at the centre of digital policy.
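For the technically inclined, that complaint journey can be sketched as a small state machine: a complaint is submitted, reviewed, and, where the user remains unsatisfied, escalated to an out-of-court body. The states and transitions below are invented for illustration.

```python
# Invented sketch of an internal complaint-handling flow. The DSA requires
# such a system to exist; the states and transitions here are illustrative.
ALLOWED = {
    "submitted": {"under_review"},
    "under_review": {"decision_reversed", "decision_upheld"},
    "decision_upheld": {"out_of_court_dispute"},  # user may escalate further
}

def advance(state: str, new_state: str) -> str:
    """Move a complaint to a new state if the transition is allowed."""
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"cannot move from {state!r} to {new_state!r}")
    return new_state

state = "submitted"
for step in ("under_review", "decision_upheld", "out_of_court_dispute"):
    state = advance(state, step)
print(state)  # out_of_court_dispute
```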

The DSA and AI: Regulating Use, Not the Tool

Although the DSA does not directly regulate AI as a technology, it has significant implications for how AI is used within digital platforms. Many platforms deploy AI for content moderation, ranking, recommendation, advertising, and user engagement. The DSA targets these use cases by imposing duties related to transparency, risk mitigation, and accountability.

For example, Article 27 requires platforms to set out, in plain language, the main parameters of their recommender systems. These systems are often AI-powered, relying on vast amounts of data to suggest content, friends, or products. Article 34 obliges very large platforms to assess systemic risks, including those caused by the misuse of AI, and Article 35 requires them to put reasonable mitigation measures in place. Such risks may involve disinformation campaigns, addictive design patterns, or discriminatory advertising.
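As a toy illustration of what disclosing the “main parameters” could look like, the sketch below pairs a ranking function with a plain-language description of the signals it uses. The signals and weights are invented; real recommender systems are vastly more complex.

```python
# Toy illustration of recommender transparency: the ranking function and a
# plain-language description of its main parameters live side by side.
# Signals and weights are invented for illustration.

MAIN_PARAMETERS = {
    "recency": "Newer posts are shown higher in your feed.",
    "engagement": "Posts with more interactions are shown higher.",
}
WEIGHTS = {"recency": 0.6, "engagement": 0.4}

def score(post: dict) -> float:
    """Rank posts by a weighted sum of the disclosed signals."""
    return sum(WEIGHTS[signal] * post[signal] for signal in WEIGHTS)

def explain_main_parameters() -> str:
    """Plain-language summary a platform could surface to its users."""
    return "\n".join(f"- {name}: {text}" for name, text in MAIN_PARAMETERS.items())

posts = [
    {"id": "a", "recency": 0.9, "engagement": 0.2},
    {"id": "b", "recency": 0.3, "engagement": 0.8},
]
print([p["id"] for p in sorted(posts, key=score, reverse=True)])
print(explain_main_parameters())
```

The point is architectural: when the explanation lives next to the ranking logic, keeping the two in sync becomes a code-review question rather than an afterthought.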

Article 37 mandates independent audits to evaluate whether platforms are meeting their risk mitigation duties. These audits include reviewing algorithmic decision-making, which often lies at the heart of content ranking and moderation processes. If AI systems are found to contribute to harms such as misinformation, discrimination, or privacy violations, platforms must act promptly to mitigate these issues.

This creates a regulatory environment where the use of AI is not restricted outright, but must be deployed responsibly. Transparency, user rights, and ethical design are not optional add-ons—they are legal requirements.

The DSA also indirectly intersects with the AI Act, which is expected to categorise AI systems by risk and impose stricter requirements on high-risk systems. Together, these laws form a two-pronged strategy: one regulates the environments in which AI operates (DSA), while the other regulates the systems themselves (AI Act).

A Coordinated Effort Toward Ethical Innovation

The DSA fits into a larger movement within the EU toward a cohesive digital policy framework. Alongside the GDPR, DMA, and AI Act, it reflects a vision of digital transformation that is grounded in rights, ethics, and democratic values. While these laws present compliance challenges, particularly for smaller firms, they also set a clear path toward responsible innovation.

The strength of this approach lies in its balance. Rather than blocking technological progress, the EU’s digital laws seek to shape it in a way that maximises benefits while reducing harm. In this context, the DSA becomes a foundational tool for building trust in the digital economy.

It also represents a strategic move on the global stage. By establishing high regulatory standards, the EU is positioning itself as a leader in digital governance. Just as the GDPR influenced data protection laws around the world, the DSA could serve as a blueprint for other jurisdictions seeking to regulate digital services in a rights-based manner.

Final Thoughts

The Digital Services Act is a landmark piece of legislation that will influence not only the European digital space but potentially set global standards. It redefines what is expected from online platforms in terms of transparency, user protection, and algorithmic accountability.

For developers, legal teams, and digital entrepreneurs, the DSA represents both a challenge and an opportunity. Those who adapt early and design with compliance in mind are likely to gain a competitive advantage in a rapidly evolving regulatory landscape.

Understanding the DSA is not just a matter of legal literacy—it’s a strategic necessity for anyone building or operating digital services in Europe. And in a time when trust in tech is under pressure, delivering ethical, transparent, and compliant platforms may be the best investment of all.

Stay curious, stay informed, and let’s keep exploring the fascinating world of AI together.

This post was written with the help of different AI tools.

Check out previous posts for more exciting insights!