“My passion lies in ensuring transparency and accessibility of models”, Joëlle Pineau, Meta FAIR

L'Usine Digitale: In the first part of this interview we talked about the Meta AI conversational assistant, present across the group's social networks and applications. What do you think of the EU preventing Meta from training its models on European data, and therefore from deploying its generative AI services there?

Joëlle Pineau: My expertise is really in model development, and I rely on my collaborators, who have a much finer understanding of regulatory issues. There are choices to be made, and it is normal for each society to position itself in relation to those choices. Artificial intelligence moves quickly and poses all kinds of challenges, so we try to understand the issues as well and as quickly as possible. Making research open and available to everyone is one of the ingredients that helps us find the right balance.


Obviously, we want to protect people's privacy. Obviously, we want models that are developed responsibly. We also want to encourage innovation. There are enormous social and economic benefits that will be derived from artificial intelligence.

Nor should we close the door on how we position ourselves on all these questions. This is where we have to find the right balance. My main focus is the transparency and availability of models, because with more transparency we can better inform this debate. It is a societal debate, not just one between governments and big business.

Do you think that the EU is restricting or even stopping innovation in AI with its new regulations?

Putting it that way is a bit strong, but I do think that limiting the availability of models can limit innovation. That is a question for Europe; it makes its own choices. We will obviously respect the rules in force in every geography where we operate. However, it is easier today to release certain products quickly in the United States.


Do you think that in the long term some products will only be released in the United States and others only in Europe for these reasons?

I think so, that's definitely what we need to consider, depending on how the laws evolve.

You mentioned Meta's “open science” approach earlier. However, open source models do not generate much profit. Could the firm change its mind in the future?

We continually re-evaluate our position, for several reasons. For one, there is a whole discussion around what we call “frontier models”. You should know that all kinds of requirements are currently being studied. Our position on this subject is based on an analysis of the potential risks of these models. It has already happened that we chose not to release certain models openly because we judged the risks too great, in particular voice-synthesis models capable of reproducing someone's voice, which can be used for abuse.

Our position is re-evaluated with each release. Each of the models we share today has been evaluated to ensure there are no risks, or that the risks are well understood. That is something we will continue to do. The company is doing well; we have plenty of other ways to generate profits that keep it financially viable. We do not necessarily need to rely on commercializing these models, and we must remember that we are at the very beginning of the commercialization of AI.

There is a lot to learn about how to do things well in terms of model development and accountability. Our open approach also lets us learn much more quickly, in particular by calling on outside researchers and collaborating with them to ensure the responsible, secure and rapid development of this technology.
