How to Detect AI-Generated Deepfake Videos?

In an era of rapidly advancing technology, the emergence of deepfake videos has become a significant concern. These AI-generated clips, which manipulate real footage to create convincing but fabricated video content, pose serious challenges to individuals and society. In this article, we delve into the world of deepfakes and guide you on how to spot these sophisticated fakes.

The Rise of AI in Video Manipulation

Artificial intelligence (AI) has transformed video editing, making it possible to create deepfakes that are increasingly difficult to distinguish from real footage. Understanding how AI is used to manipulate video is the first step in learning how to spot these forgeries.

Understanding How Deepfakes Are Made

Deepfakes are created using advanced AI techniques, primarily deep learning and neural networks. By analyzing patterns in existing video footage, these systems can generate new content in which individuals appear to say or do things they never actually did. A classic face-swap setup trains one shared encoder with a separate decoder per identity, then swaps decoders at inference time, as sketched below.
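
As a concrete illustration, here is a minimal PyTorch sketch of that shared-encoder, per-identity-decoder idea. The layer sizes, image resolution, and the training setup implied in the comments are illustrative assumptions, not the architecture of any particular deepfake tool.

```python
# Minimal conceptual sketch (illustrative sizes, not a production tool):
# one shared encoder, one decoder per identity. Training reconstructs
# each person's face through their own decoder; the "swap" happens at
# inference time by decoding person A's encoding with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z)

encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's face
decoder_b = Decoder()  # would be trained only on person B's face

face_a = torch.rand(1, 3, 64, 64)     # stand-in for a cropped face of person A
swapped = decoder_b(encoder(face_a))  # person A's expression on person B's face
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```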

Recognizing Deepfakes

Detecting deepfakes involves examining various aspects of a video, from visual cues to audio synchronization.

Visual Inconsistencies

One key to detecting deepfakes is to look for visual anomalies, such as unnatural skin tones, inconsistent lighting, or shadows that don’t match the scene. The sketch below shows one crude way to automate part of this check.
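
A minimal sketch of such an automated check, assuming OpenCV is available: it flags abrupt frame-to-frame shifts in average color, a crude proxy for the lighting and skin-tone inconsistencies described above. The file name and the threshold are illustrative assumptions.

```python
# Heuristic sketch: flag abrupt frame-to-frame shifts in average color,
# a crude proxy for lighting/skin-tone inconsistencies. The file name
# "clip.mp4" and the threshold of 25.0 are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("clip.mp4")
prev_mean, frame_idx = None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mean = frame.reshape(-1, 3).mean(axis=0)  # average B, G, R of the frame
    if prev_mean is not None:
        shift = np.linalg.norm(mean - prev_mean)
        if shift > 25.0:
            print(f"frame {frame_idx}: abrupt lighting/color shift ({shift:.1f})")
    prev_mean, frame_idx = mean, frame_idx + 1
cap.release()
```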

Audio Analysis

Analyzing the audio may provide clues. A mismatch between spoken words and lip movements is a red flag, indicating possible manipulation.
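
One way to quantify this check, sketched below under stated assumptions, is to correlate the audio loudness envelope with per-frame mouth openness: if the two barely move together, the audio may not belong to the lips. The file name, frame rate, threshold, and the random mouth-openness values (which would really come from a facial-landmark detector) are all placeholders.

```python
# Sketch of a lip-sync check: correlate the audio loudness envelope with
# per-frame mouth openness. The file name, frame rate, threshold, and the
# random mouth_openness placeholder (really the output of a facial-landmark
# detector) are all illustrative assumptions.
import numpy as np
import librosa

audio, sr = librosa.load("clip.wav", sr=16000)
fps = 30
hop = sr // fps  # one loudness value per video frame
rms = librosa.feature.rms(y=audio, frame_length=hop * 2, hop_length=hop)[0]

# Placeholder: per-frame mouth openness, e.g. the distance between upper
# and lower lip landmarks.
mouth_openness = np.random.rand(len(rms))

n = min(len(rms), len(mouth_openness))
corr = np.corrcoef(rms[:n], mouth_openness[:n])[0, 1]
print(f"audio/mouth correlation: {corr:.2f}")
if corr < 0.3:  # illustrative threshold
    print("weak correlation: possible lip-sync mismatch")
```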

Analyzing Facial Expressions

Deepfakes often struggle to accurately replicate natural facial expressions. Any unnatural or exaggerated expressions may indicate that a video is a deepfake.

Blinking Patterns

A subtle but telling signal is the blinking pattern. AI-generated videos often fail to reproduce natural blinking, resulting in either excessive or insufficient blinks.
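
A common way to measure blinking is the eye aspect ratio (EAR), which drops sharply when the eye closes. The sketch below assumes six eye landmarks per frame from a facial-landmark detector such as dlib or MediaPipe; the random EAR series and thresholds are illustrative placeholders.

```python
# Sketch of the standard eye-aspect-ratio (EAR) blink test. EAR drops
# sharply when the eye closes, so counting EAR dips approximates a blink
# count. Assumes six (x, y) eye landmarks per frame from a detector such
# as dlib or MediaPipe; the random EAR series below is a placeholder.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks around one eye."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, threshold=0.21, min_frames=2):
    """Count dips of EAR below the threshold lasting at least min_frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks

# People typically blink around 15-20 times per minute; a talking-head
# video far outside that range is the signal described above.
ears = np.random.uniform(0.15, 0.35, size=900)  # placeholder: 30 s at 30 fps
print("blinks detected:", count_blinks(ears))
```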

Technological Tools to Detect Deepfakes

As deepfakes become more sophisticated, so does the technology to detect them.

AI Detection Algorithms

AI algorithms are being developed to fight fire with fire. These algorithms can analyze videos for anomalies at a much finer level than the human eye can perceive.
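
As a rough illustration of what such an algorithm looks like at its core, here is a minimal frame-level classifier sketch in PyTorch. The architecture and sizes are illustrative assumptions; production detectors are far larger and are trained on labeled datasets of genuine and manipulated video.

```python
# Minimal sketch of a frame-level deepfake classifier: a small CNN that
# outputs one logit per frame (real vs. manipulated). Architecture and
# sizes are illustrative; real detectors are far larger and trained on
# labeled datasets of genuine and fake video.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 128 -> 64
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 64 -> 32
    nn.Flatten(),
    nn.Linear(64 * 32 * 32, 1),  # one logit: how likely the frame is fake
)

frame = torch.rand(1, 3, 128, 128)  # stand-in for one preprocessed frame
p_fake = torch.sigmoid(detector(frame)).item()
print(f"estimated probability of manipulation: {p_fake:.2f}")
```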

Software Solutions

Various software tools use AI and machine learning to flag deepfakes. These tools analyze the video frame by frame, looking for telltale signs of manipulation.
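
A minimal sketch of that frame-by-frame workflow follows, with a placeholder scoring function standing in for any per-frame detector (for example, the CNN sketched earlier); the file name is an illustrative assumption.

```python
# Sketch of the frame-by-frame workflow such tools follow: decode every
# frame, score it, and aggregate. score_frame() is a placeholder for any
# per-frame detector (e.g. the CNN sketched above); "clip.mp4" is an
# illustrative file name.
import cv2
import numpy as np

def score_frame(frame) -> float:
    """Placeholder per-frame manipulation score in [0, 1]."""
    return 0.0  # replace with a real detector's output

cap = cv2.VideoCapture("clip.mp4")
scores = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    scores.append(score_frame(frame))
cap.release()

if scores:
    print(f"mean manipulation score over {len(scores)} frames: {np.mean(scores):.2f}")
```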

The Role of Media Literacy

In the fight against deepfakes, media literacy plays an important role. It is essential to view critically and question the authenticity of suspicious content.

Educating the Public

Educational initiatives about deepfakes can empower the public to identify and question manipulated content, thereby reducing the impact of these videos.

Legal and Ethical Implications

Deepfakes raise significant legal and ethical concerns ranging from defamation to misinformation. It is important to understand these implications in the broader context of dealing with deepfake technology.

Conclusion

Detecting AI-generated deepfake videos requires a combination of visual and audio analysis, technical tools, and a critical approach to media consumption. As technology advances, it is important to remain informed and alert to identify these sophisticated counterfeits.

FAQs

  • What is the most common sign of a deepfake video? Look for visual inconsistencies and unnatural facial expressions, which are common indicators.

  • Can AI always detect deepfakes? Although AI is a powerful tool, it is not foolproof. Both deepfake and detection technologies continue to advance.

  • How can I improve my ability to spot deepfakes? Increase your media literacy and stay updated on the latest trends in deepfake technology.

  • Are there any legal measures to combat deepfakes? Legal measures are evolving, but vary by country and the specific use of deepfake technology.

  • Can deepfakes be harmless? While some deepfakes are created for entertainment, the potential for harm, especially in spreading misinformation, is significant.

Visa Initiative to Invest $100 Million in Generative AI Ventures

In a move set to reshape the landscape of commerce and payments, Visa has declared its intention to invest $100 million in companies at the forefront of developing generative AI technologies. 

The investment initiative will be executed through Visa Ventures, the company’s global corporate investment arm, which has operated for 16 years. Visa, a trailblazer in AI applications for payments since 1993, is now turning its attention to the burgeoning field of generative AI, a subset of artificial intelligence characterized by its ability to generate text, images, or other content from extensive datasets and textual prompts.

Jack Forestell, Chief Product and Strategy Officer of Visa, emphasized the profound impact generative AI will have, stating, “While much of generative AI so far has been focused on tasks and content creation, this technology will soon not only reshape how we live and work, but it will also meaningfully change commerce in ways we need to understand.”

David Rolf, Head of Visa Ventures, underscored the transformative potential of generative AI, calling it “one of the most transformative technologies of our time.” He noted that Visa Ventures possesses flexibility in terms of the number and size of investments, expressing an interest in making a range of smaller investments in the early stages of the industry.

Rolf outlined the criteria for potential investments, specifying that Visa is seeking to support companies applying generative AI to address real challenges in commerce, payments, and fintech. This includes B2B processes around payments and infrastructure with the potential to significantly impact commerce. Rolf emphasized that Visa is open to engaging with companies at various levels of the technology stack, from data organization for generative AI to end-user experiences.

Furthermore, responsible AI use aligning with Visa’s policies is a key consideration. Rolf stated, “One of our key considerations is how well these companies are practicing responsible use of AI, in line with Visa’s policies.”

This announcement follows Visa’s strategic move to appoint Marie-Elise Droga as the head of fintech, who noted that her team frequently collaborates with the Visa Ventures team. This collaboration serves as a scouting engine, identifying innovative startups that align with Visa’s vision for the future of commerce and payments. As Visa Ventures embarks on this $100 million investment journey, the landscape of generative AI technologies in commerce and payments is poised for significant transformation.

AI Startup Corti Raises $60 Million to Take on Microsoft in Health Care

Copenhagen-based startup Corti has secured a substantial $60 million in funding to further advance its mission of revolutionizing the healthcare industry through AI technology.

The investment was led by Prosus Ventures and Atomico, with participation from previous backers Eurazeo, EIFO, and Chr. Augustinus Fabrikker. Although the company has remained tight-lipped about its valuation, its remarkable growth in customers and usage speaks volumes about its impact in the sector.

Just two years ago, Corti raised $27 million in a Series A round, when it was assisting in 15 million consultations annually. Now the company serves 100 million patients each year, with its AI assistant used a staggering 150,000 times daily, which works out to roughly 55 million consultations annually (150,000 × 365) across Europe and the United States. Corti says its tools can improve healthcare workers’ accuracy in outcome predictions by up to 40% while making administrative tasks 90% faster.

Corti’s innovative AI service is often described as an “AI co-pilot” for healthcare professionals, covering various areas of patient care. It assists in triaging during patient interactions, documents entire interactions, offers in-depth analysis to guide decision-making, provides second opinions, and generates real-time and post-meeting notes to identify areas for improvement and clinician training.

Corti’s success reflects the growing adoption of AI in healthcare, particularly after the COVID-19 pandemic highlighted the need for efficient and accurate medical support. The startup has attracted a diverse range of customers, including emergency services in Seattle, Boston, and Sweden, as well as numerous hospitals and medical services.

Unlike some competitors that rely on existing AI models, Corti has taken a unique approach by developing its own AI models and components. Notably, the company has not employed in-house medical experts to avoid introducing bias into its system. Instead, Corti engaged researchers to fine-tune its AI, resulting in a more responsive and functional platform.

While initial skepticism and concerns about job replacement were common when Corti first launched in 2018, the broader acceptance of AI, epitomized by technologies like ChatGPT, has paved the way for more productive conversations. Corti aims to make AI in healthcare a mundane and indispensable part of the industry, steering clear of the contentious debates about its role.

Despite differing opinions on the impact of AI in medicine, Corti’s funding round signals a commitment to improving healthcare efficiency and provider capabilities. With the support of visionary founders Andreas Cleve and Lars Maaløe, Corti is poised to redefine the patient and healthcare experience, ultimately enabling more personalized, preventative, and proactive medicine in a rapidly evolving industry.

Microsoft Says It Will Protect Customers from AI Copyright Lawsuits

In response to growing concerns about the misuse of artificial intelligence (AI) in generating harmful content, Microsoft has pledged to take significant steps to protect its customers from potential legal repercussions. 

This commitment comes as Australia is set to implement a new code that mandates search engines like Google and Bing to prevent the dissemination of child sexual abuse material created by AI.

The new code, drafted at the Australian government’s request, seeks to ensure that search engines do not return results that include AI-generated child sexual abuse material. It also prohibits AI functions integrated into these search engines from producing synthetic versions of such harmful content, commonly referred to as deepfakes.

According to eSafety Commissioner Julie Inman Grant, the rapid proliferation of generative AI has taken the world by surprise. She emphasized that the code marks a crucial development in the regulatory and legal landscape surrounding internet platforms, which is evolving in response to the explosion of products that automatically generate lifelike content and the new challenges and responsibilities this presents for tech giants like Google and Microsoft.

Inman Grant highlighted that an earlier code proposed by Google and Microsoft did not address AI-generated content adequately. Consequently, she called upon these industry giants to reevaluate and improve the code to align with the emerging AI landscape.

“When the biggest players in the industry announced they would integrate generative AI into their search functions, we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go,” Inman Grant explained.

Microsoft’s commitment to protecting its customers from AI-generated content reflects its dedication to responsible AI development and its recognition of the evolving legal and ethical concerns associated with AI. As a responsible tech leader, Microsoft is poised to play a pivotal role in shaping the industry’s response to these challenges.

This development comes on the heels of the Australian regulator registering safety codes for various other internet services, including social media, smartphone applications, and equipment providers. These codes are set to take effect in late 2023, marking a significant milestone in Australia’s efforts to ensure online safety and security.

While this regulatory initiative is a positive step towards addressing the risks posed by AI-generated content, it also raises questions about privacy and the balance between security and personal freedoms. The regulator is still in the process of developing safety codes related to internet storage and private messaging services, an endeavor that has faced resistance from privacy advocates worldwide.

In conclusion, Microsoft’s commitment to protecting its users from AI-generated harmful content is a proactive response to evolving challenges in the digital landscape. As technology continues to advance, it is imperative for industry leaders to collaborate with regulators and stakeholders to establish guidelines and practices for the responsible use of AI.

Australia to Require AI-made Child Abuse Material to be Removed from Search Results

In a significant move to combat the proliferation of child sexual abuse material generated by AI, Australia’s internet regulator has announced that it will mandate search engines such as Google and Bing to take strict measures to prevent the dissemination of such harmful content. 

This initiative comes as part of a new code drafted in collaboration with industry giants at the Australian government’s request, aimed at safeguarding the digital landscape from AI-generated child abuse material.

eSafety Commissioner Julie Inman Grant revealed that the code, which is designed to protect the online community, imposes two crucial requirements on search engines. First, it compels search engines to ensure that AI-generated child abuse material does not appear in search results. Second, it prohibits the use of generative AI to produce synthetic versions of such explicit content, commonly referred to as “deepfakes.”

“The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree,” Inman Grant acknowledged. This rapid expansion of AI technology has necessitated a reevaluation of regulatory and legal frameworks governing internet platforms.

Inman Grant pointed out that an earlier code drafted by Google (owned by Alphabet) and Bing (owned by Microsoft) did not address the emerging issue of AI-generated content. Consequently, she urged these industry giants to revise their approach. “When the biggest players in the industry announced they would integrate generative AI into their search functions, we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go,” Inman Grant emphasized.

A spokesperson for the Digital Industry Group Inc., an Australian advocacy organization representing Google and Microsoft, expressed satisfaction with the approval of the revised code. “We worked hard to reflect recent developments in relation to generative AI, codifying best practices for the industry and providing further community safeguards,” the spokesperson stated.

This development follows the regulator’s earlier initiatives to establish safety codes for various internet services, including social media platforms, smartphone applications, and equipment providers, which will take effect in late 2023. However, the regulator continues to face challenges in developing safety codes for internet storage and private messaging services, with privacy advocates worldwide voicing concerns.

As Australia takes a proactive stance in addressing the grave issue of AI-generated child abuse material, it serves as a noteworthy example of the evolving regulatory landscape surrounding internet platforms. The code aims to strike a balance between harnessing the potential of AI technology and safeguarding the well-being of online users, particularly the vulnerable, as the digital realm continues to evolve.

Google Will Add AI Models from Meta and Anthropic to Its Cloud Platform

Google, a subsidiary of Alphabet Inc., is adding artificial intelligence technologies from firms such as Meta Platforms Inc. and Anthropic to its cloud platform, weaving more generative AI into its services and promoting itself as a one-stop resource for cloud users seeking the latest developments. Google’s cloud clients will be able to access Meta’s Llama 2 large language model and the Claude 2 chatbot from AI startup Anthropic, and customize them with company data for their own services and applications.

The decision, announced on Tuesday at the company’s Next ’23 conference in San Francisco, is part of Google’s effort to establish its platform as one where users can choose the AI model that best suits their requirements, whether from Google itself or from one of its collaborators. Google Cloud customers now have access to more than 100 powerful AI models and tools, the firm said.

The company also said that its Duet AI tool would be made more widely accessible to Workspace customers this year, with public access to follow.

In applications like Google Docs, Sheets, and Slides, users can tap a generative AI assistant that responds to requests for help with content creation. According to Google, Duet AI, which was unveiled in May, can translate captions into 18 languages, deliver meeting summaries, and take notes during video sessions.

Using a new feature dubbed “attend for me,” users can send the tool to participate in meetings on their behalf, deliver messages, and provide an event report. Google also announced new collaborations with businesses such as GE Appliances and Fox Sports that will let consumers benefit from AI in ways like creating personalized recipes or watching replays of sporting events from Fox’s broadcast library.

“We are in an entirely new era of digital transformation, fueled by gen AI,” Thomas Kurian, chief executive officer of Google Cloud, said in a blog post timed to the announcements. “This technology is already improving how businesses operate and how humans interact with one another.”
