Beyond LLaMA 2: AI for the broadcast & media sector must come with ethical principles to safeguard society

By Dr Amal Punchihewa

Driven by the hype surrounding generative AI and an increasing focus on efficiency, AI has now risen to the top of broadcasters' priority lists, with many claiming accuracy in the mid-90 per cent range when using AI and Machine Learning (ML) for closed captioning, script and data generation.

As we engage with technologies such as AI, it is time to act responsibly, ethically and sustainably, without leaving anyone out, in serving our broadcast and media audiences.

ITU (International Telecommunication Union) says AI has emerged as a ground-breaking technology with the potential to reshape societies and economies. Its applications range from autonomous cars and voice-activated assistants to complex diagnostic tools in healthcare and personalised learning systems. However, along with its immense benefits come several challenges and risks that necessitate a broad global dialogue.

Before we can understand ChatGPT and Generative AI, it helps to know what AI and chatbots are.

AI refers to the emulation of human intelligence in machines such as computers that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind such as learning and problem-solving. AI is a branch of computer science that aims at creating intelligent technology capable of replicating human learning and problem-solving skills.

Just a few years ago, discussions of data ethics and AI ethics were reserved for non-profit organisations and academics. Today, large tech companies like Microsoft, Facebook (Meta), Twitter, Google, and other enterprises are urged to deploy teams to tackle the ethical problems that arise from the widespread collection, analysis, and use of massive amounts of data, particularly when that data is used to train ML models or AI engines.

Researchers have already carried out the essential groundwork on the principles of responsible AI, which continue to hold immense relevance, especially as we enter the era of Generative AI.

Companies and countries are rolling out high-level AI ethics principles. Google and Microsoft, for instance, communicated their principles years ago. The difficulty comes in operationalising those principles.

AI technologies like ML and Natural Language Processing (NLP) are being employed in the broadcast industry to automate processes and manage the sheer volume of data being generated by over-the-top (OTT) platforms and traditional linear broadcasters.

AI is being used to automatically generate metadata, intermediate frames and super slow-motion from regular camera feeds, analyse audience behaviour patterns, provide speech-to-text, and refine complex workflows, among many other applications.
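
To make one of these applications concrete, here is a minimal sketch of automated caption generation built on an open-source speech-to-text model. It assumes the openai-whisper Python package and a hypothetical audio file named programme_audio.mp3, neither of which is mentioned in this article; real broadcast captioning systems are considerably more elaborate.

```python
# A minimal speech-to-text captioning sketch, assuming the open-source
# openai-whisper package (pip install openai-whisper) and a local file
# "programme_audio.mp3"; both are illustrative assumptions, not the
# broadcaster systems discussed in this article.
import whisper

def seconds_to_srt(t: float) -> str:
    """Format seconds as an SRT timestamp (HH:MM:SS,mmm)."""
    hours, rem = divmod(int(t), 3600)
    minutes, secs = divmod(rem, 60)
    millis = int(round((t - int(t)) * 1000))
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{millis:03d}"

model = whisper.load_model("base")                # small, general-purpose model
result = model.transcribe("programme_audio.mp3")  # returns text plus timed segments

# Write the timed segments out as a simple SRT caption file.
with open("programme_captions.srt", "w", encoding="utf-8") as srt:
    for i, seg in enumerate(result["segments"], start=1):
        srt.write(f"{i}\n")
        srt.write(f"{seconds_to_srt(seg['start'])} --> {seconds_to_srt(seg['end'])}\n")
        srt.write(f"{seg['text'].strip()}\n\n")
```

The timed segments returned by the model are written out as a simple SRT file, the kind of sidecar caption format many playout and OTT systems accept.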

AI can be used to explore whether technology can help broadcasters uncover hidden audience preferences across genres and programming, so that broadcasters may assess what their audiences might like. It can also help broadcasters understand the limitations and challenges of using this technology to support and enhance their content curation and delivery. This should be done with the intention of supporting, not replacing, broadcast schedulers.
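
As an illustration of how such preference discovery might work in principle, the following sketch clusters a toy viewing-history matrix to surface latent genre preferences. The genres, the numbers and the use of k-means are all assumptions made for this example, not a description of any broadcaster's system.

```python
# A minimal sketch, assuming a toy viewing-history matrix (hours watched per
# genre per viewer); the data and genre labels here are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

genres = ["news", "drama", "sport", "documentary"]
# Rows = viewers, columns = hours watched per genre (hypothetical numbers).
viewing_hours = np.array([
    [8.0, 1.0, 0.5, 2.0],
    [7.5, 0.5, 1.0, 3.0],
    [0.5, 6.0, 0.0, 1.0],
    [1.0, 7.0, 0.5, 0.5],
    [0.0, 1.0, 9.0, 0.5],
    [0.5, 0.5, 8.0, 1.0],
])

# Group viewers with similar habits; each cluster centre hints at a latent preference profile.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(viewing_hours)
for label, centre in enumerate(model.cluster_centers_):
    top_genre = genres[int(np.argmax(centre))]
    print(f"Cluster {label}: leans towards {top_genre} ({centre.round(1)})")
```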

On 18 July 2023, Meta announced LLaMA 2, an open-source AI and the company’s first large language model that is available for anyone to use for free. But many questions still remain. Meta is not releasing information about the data set it used to train LLaMA 2 and cannot guarantee that it did not include copyrighted works or personal data, according to a company research paper shared exclusively with MIT Technology Review.

It is essential to consider the future risks of AI and the importance of AI ethics and governance for humanity and sustainability.

Fake news has already posed challenges by bringing distrust towards media, politics and established institutions around the world. While AI might make things even worse, can it also be used to combat misinformation?

Studios are at the forefront of AI adoption, and we have often cheered the realistic recreations from Jurassic Park to Avatar.

Computer-generated imagery (CGI) has been used in the film industry for decades to create virtual actors and extras, producing crowd shots in sports movies and warriors in battle scenes. But just as computers replaced animation artists who once drew 24 frames of artwork for each second of film, AI makes it much easier, and cheaper, to use CGI to generate performances by actors who are not there. Performers therefore fear that the studios want to use AI to eliminate their acting jobs.

Using AI to create performances that never took place is not hypothetical; it is already happening. AI-generated deepfake videos are convincing, even though they are totally fabricated.

The actors’ union in the US, which has had 160,000 members on strike since the second week of July 2023, is afraid that AI will lead to far fewer employed actors in the future.

Experts say whatever deal is reached on AI is not going to lead to an outright ban on its use to create virtual actors in films. More likely, it will set up rules for its use and a minimum level of compensation for actors whose voice or image is manipulated and inserted using AI.

As AI technology is advancing so rapidly, it is very hard to formulate a meaningful set of detailed best practices and guidelines that will not become outdated in a short time.

AI has been advancing rapidly, and it is now generating a hurried, excited and disorganised competition among tech giants to dominate the market and its profits. The priority is profit and market dominance, not integrity and the safety consequences.

Tech leaders believe AI is growing too fast, fast enough to have negative impacts on society and humanity. AI is not without its own set of problems. The first is the way it is used: facial recognition to unlock your phone sounds great, but what happens when the same technology is used to spy on you?

The other problem is bias. You would expect machines to be neutral. However, they are directed by us and, as humans, we are inherently biased. The machines we make inherit that bias, and AI can even amplify the existing biases in society. AI is only as good as the data it is fed; if the data is biased, so is the machine.
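
The point about biased data can be demonstrated in a few lines. The sketch below trains a simple classifier on invented, deliberately skewed historical decisions; the "group" feature and approval rates are hypothetical, chosen only to show that a model trained on biased outcomes reproduces them.

```python
# A minimal sketch of "biased data in, biased model out", using invented synthetic data.
# The feature "group" and the skewed labels below are hypothetical, purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)            # 0 or 1, e.g. two audience segments
skill = rng.normal(0, 1, n)              # the attribute we actually care about

# Historical labels: identical skill, but group 1 was approved far less often.
approve_rate = np.where(group == 0, 0.7, 0.3)
label = (rng.random(n) < approve_rate).astype(int)

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, label)

# The trained model reproduces the historical skew between the two groups.
test = np.array([[0, 0.0], [1, 0.0]])    # same skill, different group
print(model.predict_proba(test)[:, 1])   # approval probability per group
```

Even though both test cases have identical "skill", the model assigns them very different approval probabilities, because that is what the historical data taught it.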

While AI technology and products could help humans in many areas, they often lack privacy safeguards, ethics, explainability and responsibility. Because of these potential harms, there are discussions around AI governance and regulation.

For example, ChatGPT was temporarily banned in Italy over privacy concerns; it could return if its maker, OpenAI, complies with measures to satisfy the regulators who imposed the ban.

Compare AI with social media: we thought about regulating social media only after things went out of control. AI, too, is a double-edged sword that can be used for both good and bad.

The risks of using AI often capture the headlines, and the potential for misuse of the technology has led to justifiable apprehension. However, it is essential to remember that these risks should not overshadow the transformative positive potential of AI.

The emerging principles for the ethical use of AI are transparency, accountability, avoiding bias, ensuring fairness and maintaining security. However, particular emphasis needs to be placed on promoting inclusiveness in all its aspects, from inclusive data sets to the ability to use AI.

Inclusiveness extends to providing equitable access to AI’s benefits, regardless of one’s geographical location, and addressing the digital divide as 2.7 billion people are still not even online.

Dr Amal Punchihewa is an ITU expert and advisor/consultant to the Asia-Pacific Institute for Broadcasting Development (AIBD), and was formerly Director of Technology & Innovation at the Asia-Pacific Broadcasting Union (ABU).
