The new DMCC Bill: What could the CMA do?

What is the DMCC Bill and why is it interesting?

The oligopolistic rise of Big Tech, the spread of LLMs and the dangers around the use and misuse of our data have clearly become areas of increasing attention for governments around the world. The EU has been the leader in legislating in this space, from GDPR through to more recent developments such as the Digital Markets Act, which specifically targets the leading online platforms. And since 2021, the FTC under Lina Khan has also taken a much closer interest in Big Tech, apparent in its attempts (with varying degrees of success) to block recent acquisitions by Meta, Microsoft and Nvidia.

Of course, since Brexit, the UK government has been promising new legislation of its own, but has fallen behind the EU. The DMCC Bill, finally published in April this year, is the UK's take on these issues, with three major sections: consumer rights, digital markets and competition, the latter two being the ones currently in the spotlight.

There are a multitude of reasons that make this Bill interesting: the UK is trying to find a balance between the business-forward position of the US and the regulation-heavy EU. This has led to a Bill that, while similar in goals to the EU's DMA and armed with heavy financial penalties, aims to be "pro-competition" and positive for the UK economy. This is evident when reading the testimony of the heads of the CMA at the committee stage of the Bill, who particularly emphasised how flexible and positive the results will be, building a "foundation of an economy that encourages investment, growth and innovation". The Bill is also unique in giving the newly established DMU (Digital Markets Unit) the freedom to tackle the Big Tech giants (those that fall under its definition of having Strategic Market Status) with individually tailored regulation, rather than the one-size-fits-all approach of the EU's Digital Markets Act.

The three specific actions that can be taken against firms with this status are a legally binding code of conduct, PCIs (pro-competition interventions) and stringent reviews of all mergers and acquisitions. The codes of conduct that the DMU can impose fall under three areas: fair dealing, open choices, and trust and transparency. I'm particularly interested in the codes and PCIs, as they are very broadly defined in the Bill and give the Government almost unlimited options. While there have been many reviews of the Bill by law firms summarising the legal powers it grants and how they may affect industry, I think there are lots of interesting and specialised options the DMU could take to actually try to influence the future of these platforms.

What is Strategic Market Status?

For a firm to have Strategic Market Status, it has to have at least a billion pounds of revenue in the UK or 25 billion globally. This essentially whittles the field down to the biggest tech companies, before even coming to the further criteria, namely for the firm to have (link)

  • Substantial and entrenched market power, and
  • A position of strategic significance

While "a position of strategic significance" is further detailed in the Bill, this essentially ends up covering most of the main Big Tech firms, from Meta to Amazon, as they all fall under this category in some way, shape or form.

Problem space #1: Messaging Inter-operability (and inter-op in general)

The much talked-about solution is inter-operability, similar to the one proposed by the EU's DMA, which would mandate that all large messaging apps provide a standard API/interface which anybody can access. This would in theory allow people to message across platforms and apps: no longer would you need to download Messenger simply because your network is on it, which in turn enables fair choice and allows startups in this space to flourish. However, there are considerable problems with this, from trying to preserve end-to-end encryption, to enabling groups and user discovery, to preventing a proliferation of scammers. On top of that, the fundamental problem is that these platforms are often differentiated on features (e.g. the ability to react or reply to messages), and mandating inter-operability would flatten these differences, as feature parity would not be possible across different apps. All these challenges require a tremendous amount of cooperation between these firms and a lot of technical work, although it seems the EU's DMA is determined to push for a solution here (though not at the cost of privacy). The approach also comes with downsides, as it only asks for the most basic feature support and may put the burden on the competing platform to integrate, which could lead to very few integrations.
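
To make "the most basic feature support" concrete, here is a minimal sketch of what a mandated cross-platform messaging interface might expose. Everything in it (the class, method and field names) is my own assumption for illustration; neither the DMA nor the Bill defines such an API. Notice how little it covers: reactions, replies and group management, the features platforms actually compete on, are exactly the parts that are hard to standardise.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Message:
    """A lowest-common-denominator message: plain text plus an optional attachment."""
    sender: str              # e.g. "alice@platform-a.example"
    recipient: str           # e.g. "bob@platform-b.example"
    body: str
    attachment_url: str | None = None


class MessagingGateway(ABC):
    """Hypothetical interface a gatekeeper platform could be asked to expose."""

    @abstractmethod
    def send(self, message: Message) -> str:
        """Deliver a message to a user on this platform; returns a message id."""

    @abstractmethod
    def fetch_new(self, recipient: str, since_id: str | None = None) -> list[Message]:
        """Poll for messages addressed to an external user since a given message id."""

    @abstractmethod
    def lookup_user(self, handle: str) -> bool:
        """User discovery: check whether a handle exists on this platform."""
```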

Instead, it would be great to see the DMU tackle this from a different angle and push the 'gatekeepers' to work with a specific few startups, or perhaps the government itself, to develop inter-operability between a few specific services. This more targeted intervention would force the firms to invest in inter-operability and come up with solutions to the above problems, while also allowing the government to assess progress through real-time testing. This approach, rather than mandating everything at once, can balance out the risks and difficulties while ensuring that investment and advances in this area continue, paving the way for eventual full inter-operability. It would mirror the CMA's approaches to financial inter-operability and to the online advertising markets: specific interventions that reduce platform power, rather than an industry-wide approach which may not work out. This also extends to practices such as Apple's closed-off App Store payment model and its use of Lightning cable chargers, which the EU has already gone after (successfully, as seen by the Lightning-to-USB-C change in the iPhone 15). While mandating a standard is definitely effective in some cases, in others a better approach might be regulation that leads to a collaborative building process rather than a hard switch.

Problem space #2: Data collection standards

One aspect of 'transparency' which could definitely be improved is visibility of the data being collected by these companies. GDPR does a great job of enforcing rules around privacy, from the right to demand deletion, to requiring proper consent prior to obtaining data, to only collecting data for specified uses. However, one thing it does not do is make this data collection more understandable to the public or easier to keep track of for an ordinary user. Requesting your personal data from Facebook, for example, returns a dump of disparate data from which it is very hard to make sense of 'what Facebook knows about you'. One way to improve this process might be for the government to create and release standardised data formats, and mandate that Facebook, Google etc. release information in those formats. This would help startups, or the government itself, build tools that help the general public make sense of what data has been collected, for what use, and how they can request for that data to be deleted. Startups and tools like these, which help you request your data from big tech companies (example 1, example 2), would be so much more effective, and would standardise the process of access requests so that it is of actual value to society.
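
To illustrate the idea, here is a rough sketch of what a standardised, machine-readable export record could look like. The field names and structure are my own assumptions, not anything in GDPR or the Bill; the point is simply that every platform would have to answer the same questions in the same shape, so third-party tools can parse any export uniformly.

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class DataHolding:
    """One standardised entry in a personal-data export (illustrative fields only)."""
    category: str           # e.g. "location history", "ad interests"
    purpose: str            # what the data is used for, in plain language
    source: str             # "provided by user", "inferred", "third party"
    retention: str          # how long it is kept
    deletion_endpoint: str  # where a deletion request for this category can be sent


# A hypothetical export: a list of holdings every platform would have to produce
# in the same machine-readable shape.
export = [
    DataHolding(
        category="ad interests",
        purpose="Ranking and personalising adverts",
        source="inferred",
        retention="Until account deletion",
        deletion_endpoint="https://platform.example/privacy/delete?category=ad-interests",
    ),
]

print(json.dumps([asdict(holding) for holding in export], indent=2))
```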

Problem space #3: Standardising the use of ML, Algorithms & Language

Extending regulation around data privacy, collection and reporting onto ML and automated systems is, I believe, an important next step. There are countless cases where automated systems produce biased decisions, mostly due to biases in the training data, but also due to plenty of other factors. When OpenAI released GPT-4, they were hesitant to reveal what data it was trained on, or the model parameters, as these are trade secrets. However, revealing aggregated statistics is unlikely to divulge trade secrets. In the same way that public companies have to report diversity statistics, we could mandate these companies to provide aggregated statistics on the data used for training. For example, if images of people are used, we could mandate the release of race and gender breakdowns of the training sets; if a model is used to make hiring decisions, we could mandate the release of the demographics of the training set. While this won't stop bias in AI or in automated systems, it's a first step towards properly understanding the background of these models. Another step could be to explicitly define and document the workflows in which they are used, so as to provide transparency to the public and regulators. There are quite a few opinions which fall along similar lines: for example, here is a proposed system of classification of algorithms, especially those which can cause harm, which would then place reporting requirements on them. Extending liability around the use of automated systems would achieve a similar effect, but go further, in that these systems would not be used at all without extensive testing to ensure they are free of bias.
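
As a concrete (and entirely hypothetical) illustration of the kind of aggregated disclosure I mean, the published report could be as simple as per-attribute proportions over a training-set manifest, with no raw records ever released. The attributes and values below are invented for the example.

```python
from collections import Counter

# Hypothetical training-set manifest: one record per training example, carrying
# the demographic attributes a regulator might require to be reported in aggregate.
training_manifest = [
    {"id": 1, "gender": "female", "age_band": "25-34"},
    {"id": 2, "gender": "male", "age_band": "35-44"},
    {"id": 3, "gender": "female", "age_band": "35-44"},
    {"id": 4, "gender": "non-binary", "age_band": "18-24"},
]


def aggregate_breakdown(records, attribute):
    """Return the share of training examples for each value of a demographic attribute."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}


# The disclosure would contain only these aggregates, never the underlying records.
for attribute in ("gender", "age_band"):
    print(attribute, aggregate_breakdown(training_manifest, attribute))
```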

Another interesting point for me is how technology companies talk about their innovations, specifically the language we all use. Words are powerful, and there is a significant amount of research showing how overuse of the term 'artificial intelligence' leads to public misconceptions about how the technology works (a good summary is here). There is also now guidance from the Associated Press on how to talk about AI in journalism. I would love to see regulators tackle this under the codes of conduct, asking the big tech firms to be standardised and explicit in how they communicate their developments. Building out a standardised vocabulary around these developments would help build trust and bring developers and technologists closer to the general public.

Problem space #4: Restrict compute - for multiple reasons

Perhaps the most radical intervention I'd like to see is the restriction of compute time for the big tech companies. Over the last 20 years, we've finally managed to implement carbon regulations: limiting the use of fossil fuels, taxing them, and putting caps on the emissions a company can make. Data centres were estimated to account for about 1% of the world's total energy use in 2018, a number that can, and certainly has, skyrocketed since the recent rise of transformers trained on larger and larger datasets. Much like we regulate and cap carbon emissions, it seems fair to regulate and cap the usage of compute time, especially around large language models, forcing companies to focus on efficiency and levelling the playing field for startups, who struggle to procure GPUs to train models that can compete with the big players. Perhaps this regulation would first take the form of regulating GPUs or specific types of compute usage; after all, it isn't really practical to regulate how much compute Google uses to answer search queries. For non-user-facing processes, however, it seems a fair way to increase competition while also keeping a check on the energy usage of these behemoths.
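
As a toy illustration of how accounting against such a cap might work, here is a minimal sketch. The unit (GPU-hours) and the cap value are arbitrary assumptions on my part; a real scheme might count FLOPs or energy instead, with thresholds set by the regulator.

```python
# Illustrative only: a toy ledger for tracking training compute against an annual cap.
ANNUAL_TRAINING_CAP_GPU_HOURS = 5_000_000  # arbitrary figure for the example


class ComputeLedger:
    """Accumulates reported training runs and checks them against the cap."""

    def __init__(self, cap: float):
        self.cap = cap
        self.used = 0.0

    def record_run(self, gpu_count: int, hours: float) -> None:
        """Add one reported training run, measured in GPU-hours."""
        self.used += gpu_count * hours

    def remaining(self) -> float:
        return self.cap - self.used

    def over_cap(self) -> bool:
        return self.used > self.cap


ledger = ComputeLedger(ANNUAL_TRAINING_CAP_GPU_HOURS)
ledger.record_run(gpu_count=2048, hours=720)  # a hypothetical month-long training run
print(ledger.remaining(), ledger.over_cap())
```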

To watch out for

There have been regulations that were rather ill-advised, for example the link tax tried in Canada and Australia, because these authorities are local and face limitations on their power. On its own, the UK might not be a big enough market to compel tech firms to follow its regulation; rather, a firm may simply leave or disable the particular feature altogether in the UK, much like Google and Meta plan to do in the case of the link tax in Canada (2). Therefore, proposed regulation has to make sense for all parties involved.

An important point to note is that Big Tech firms generally lobby in favour of this kind of 'light regulation', as they, unlike smaller firms, have the capacity to invest in teams to comply with these codes of conduct. However, since the codes of conduct only apply to the large firms, they can be quite exacting without hurting competition.

There is also a strong corporate flavour to a lot of the research done on algorithmic fairness, ethics in AI and technology policy in general, with firms often funding this research and researchers having close links with these companies or switching between research and corporate positions (a long and excellent feature on this by Rodrigo Ochigame is here). Any regulatory authority should therefore be wary of the funding sources behind a piece of research before deciding to act on it.