Your Tech Story


Adobe Will Charge Less Than OpenAI for Image Generation Tool


In a bold move to capture the rapidly growing market of AI-powered image generation, Adobe Inc. has unveiled its latest offering, Firefly, which comes at a price that undercuts OpenAI’s Dall-E. 

Image Source: finance.yahoo.com

This development is poised to reshape the landscape of creative AI tools and make them more accessible to a broader user base. Adobe, renowned for its suite of creative software including Photoshop and Illustrator, has introduced Firefly as an integrated generative artificial intelligence imaging tool. This innovative tool will be available within Adobe’s creative software, as well as on a standalone website, offering users a seamless experience.

One of the most compelling aspects of Adobe’s Firefly is its pricing strategy. Subscribers to Adobe’s creative software packages will enjoy a set number of image generations as part of their existing plans, with the highest-tier subscription offering an impressive 3,000 image generations per month. This inclusion of AI-powered image generation within existing packages is expected to attract professionals and enthusiasts alike.

What sets Adobe’s Firefly apart is its pricing model for additional image generations. Users who wish to produce more images beyond their subscription allocation will be charged approximately 5 cents per creation. This pricing is notably lower than OpenAI’s Dall-E, which has been a popular choice for AI-generated images but charges around 13 cents per credit on the web. While Adobe’s pricing may not be the absolute cheapest, it strikes a balance between affordability and quality.

Adobe’s move into the AI-driven creative space is strategic. By integrating AI into its widely-used products, the company aims to gain a competitive edge over startups and other software providers. Firefly is designed to enhance the creative process for artists, designers, and content creators, enabling them to generate stunning visuals with ease.

Furthermore, Adobe is emphasizing the “commercially safe” nature of Firefly. This means that the company will support its customers in legal matters, particularly if they encounter copyright claims related to images generated using the tool. This commitment to customer protection is expected to reassure businesses and individuals concerned about potential legal issues when using AI-generated content.


In a bid to make Firefly accessible to a wider audience, Adobe is offering a free tier that allows users to produce up to 25 images per month. This move aligns with Adobe’s strategy of democratizing creative tools and providing opportunities for users to explore the capabilities of AI-generated content.

As Adobe steps into the AI image generation arena with Firefly, it is not only challenging competitors like OpenAI but also reshaping the creative landscape. With its cost-effective pricing, seamless integration, and commitment to customer support, Adobe is poised to make AI-powered creativity more accessible and user-friendly for professionals and enthusiasts alike. Firefly represents a significant leap forward in the world of AI-driven content creation, promising a brighter future for digital artists and designers.

Alibaba CEO Elevates AI to Key Priority in Group Revamp Plan


Alibaba Group Holding Ltd, one of China’s tech giants, is embarking on a strategic transformation that places artificial intelligence (AI) and user experience at the forefront of its priorities. 

Image Source: techwireasia.com

This bold move comes as the company faces intensified competition and economic challenges in a rapidly evolving market. The newly appointed CEO, Eddie Wu, articulated his vision for the company in a memo to employees, marking a significant shift in Alibaba’s approach. Wu emphasized the need to pivot towards an “AI-first” strategy while remaining mindful of the hundreds of millions of users who contributed to the company’s immense success.

“We will recalibrate our operations around these two core strategies and reshape our business priorities,” Wu stated in his memo. This renewed focus on AI is in response to mounting competition from emerging rivals like ByteDance Ltd in the realm of social media and significant AI investments made by companies like Baidu Inc. Alibaba aims to reinforce its investments in AI-driven tech businesses, internet platforms, and its global commerce network, aligning with the broader trend of Chinese tech companies prioritizing AI.

Alibaba’s strategic shift is taking place against a backdrop of fierce competition and domestic economic challenges. The company is slowly recovering from a two-year-long tech crackdown imposed by Beijing, and the unexpected departure of former CEO Daniel Zhang, who had recently accepted the role of steering the key Cloud Intelligence Group, signals a changing landscape within the organization.

Analysts suggest that the departure of Zhang may lead to greater influence for the new leadership team, composed of Eddie Wu and group chairman Joe Tsai. Wu and Tsai, both long-time associates of co-founder Jack Ma, are taking the reins at a pivotal moment. Alibaba must not only defend its top position against competitors like JD.com Inc. but also navigate a complex plan to split into six major business units.

Among these divisions, the cloud unit is seen as a significant potential growth driver, particularly in the AI infrastructure and services sector. Alibaba is actively seeking fresh funds, with plans for a Hong Kong initial public offering of its Freshippo grocery chain temporarily on hold due to valuation concerns.


Alibaba’s entry into the global AI race aligns with the broader importance of AI in tech companies and national strategic objectives. While the company did not secure initial regulatory approvals for offering generative AI services in China, it has made notable strides with the integration of ChatGPT-like AI models into its meeting and messaging apps.

Jeffrey Towson, a partner at TechMoat Consulting, emphasized the significance of the cloud division, stating, “Who is going to run Alibaba Cloud is now the single most important growth question for Alibaba.” The company remains committed to independently spinning off the cloud unit, which seeks to raise substantial funding, potentially involving Chinese state enterprises.

G-20 Broadens Debate on AI Risks and Mulls Global Oversight


The G20 summit, hosted by Indian Prime Minister Narendra Modi, provided a platform for world leaders to engage in crucial discussions regarding the future of artificial intelligence (AI). 

Image Source: headtopics.com

The primary focus of these discussions was to harness the economic potential of AI while safeguarding human rights, and many leaders voiced the need for global oversight of this rapidly evolving technology.

European Commission President Ursula von der Leyen proposed the establishment of an oversight body akin to the Intergovernmental Panel on Climate Change, emphasizing the necessity of “human-centric” AI governance. This sentiment was echoed by Modi, who emphasized the need to create a framework that ensures AI’s development aligns with human values and rights.

Notably, even AI innovators themselves are advocating for political leaders to play a regulatory role in AI development. The acknowledgment of this necessity underlines the importance of addressing the ethical and societal implications of AI on a global scale.

German Finance Minister Christian Lindner expressed the bloc’s commitment to addressing AI ethics through common rules. He noted that this process had already begun, with experts laying the foundation for deeper discussions in the coming year. The commitment to ethical AI governance is a significant step toward ensuring responsible AI development and deployment.

In their final communique, G20 leaders affirmed their dedication to “responsible AI development, deployment, and use.” This commitment encompasses the protection of fundamental rights, transparency, privacy, and data security while avoiding potential pitfalls. Additionally, the leaders endorsed a “pro-innovation regulatory/governance approach” designed to maximize the benefits of AI while carefully considering its associated risks.

This G20 initiative aligns with the previous agreement reached by leaders of the Group of Seven (G7) advanced economies. In May, the G7 leaders expressed concerns about the potential risks posed by AI technologies and initiated the “Hiroshima Process.” This process involves cabinet-level discussions aimed at addressing AI challenges, with results expected to be presented by year’s end.

AI governance is poised to remain a central focus in international forums. Italy, set to preside over the G7 in 2024, is committed to advancing AI governance. Italian Prime Minister Giorgia Meloni and Prime Minister Modi discussed coordination efforts during the G20 summit, signaling their commitment to a comprehensive and responsible approach to AI.


Furthermore, the United Kingdom is preparing to host the inaugural global summit on Artificial Intelligence on November 1-2. Prime Minister Rishi Sunak seeks to position the UK as a leader in AI technology, emphasizing both its potential for positive contributions, such as expediting medical diagnoses and reducing emissions, and the need to mitigate potential misuse, including election interference and disinformation campaigns.

Key figures, including US President Joe Biden and prominent tech leaders, are expected to participate in the UK summit. This international collaboration underscores the global commitment to harnessing AI’s potential while ensuring ethical and responsible governance.

SK Hynix’s $24 Billion Rally Unraveling on US-China Tech War


For SK Hynix Inc., the South Korean chipmaker whose market value rose by an astonishing $24 billion in 2023, it has been an exhilarating year. However, the escalating technological conflict between the United States and China now threatens that spectacular rally. According to a recent Bloomberg report, tensions arising from the geopolitical dispute are jeopardizing SK Hynix’s future prospects.

Image Source: Bloomberg.com

As a major supplier to tech giants Apple Inc. and Nvidia Corp., SK Hynix has benefited from this year’s surge in artificial intelligence. By the end of August, the company’s stock price had risen more than 60 percent on the favorable market conditions.

Bullish investors were undeterred by quarterly declines or warnings about American sanctions against China, making SK Hynix’s stock one of the most expensive among Asian semiconductor companies.

Technology has become the central battleground in the ongoing trade dispute between the United States and China. The U.S. government has restricted access to American technology, citing national security concerns as the primary justification.

For businesses such as SK Hynix that depend heavily on cross-border commerce and cooperation between the two nations, the situation has created considerable anxiety. Companies are finding it difficult to maintain stable operations and ensure long-term growth as these geopolitical conflicts become ever more entangled with global supply chains.

Bloomberg’s report cites industry analysts who say SK Hynix is particularly vulnerable because it relies on Chinese customers for a substantial portion of its sales. Disruption caused by strained US-China ties could make it harder for SK Hynix to meet demand, or even cost it access to key markets.


SK Hynix is not alone in this predicament. Samsung Electronics Co. and Taiwan Semiconductor Manufacturing Co. (TSMC), two other major players in the semiconductor sector, are also grappling with the fallout from the US-China technology conflict.

“There will probably be no actions against Hynix, but the US government might probe the distribution channels,” said Mr Tom Kang, an analyst at Counterpoint Technology Market Research.

Source: straitstimes.com
Microsoft Says It Will Protect Customers from AI Copyright Lawsuits


In response to growing concerns about the misuse of artificial intelligence (AI) in generating harmful content, Microsoft has pledged to take significant steps to protect its customers from potential legal repercussions. 

Image Source: seattletimes.com

This commitment comes as Australia is set to implement a new code that mandates search engines like Google and Bing to prevent the dissemination of child sexual abuse material created by AI.

The new code, drafted at the Australian government’s request, seeks to ensure that search engines do not return results that include AI-generated child sexual abuse material. It also prohibits AI functions integrated into these search engines from producing synthetic versions of such harmful content, commonly referred to as deepfakes.

According to e-Safety Commissioner Julie Inman Grant, the rapid proliferation of generative AI has taken the world by surprise. She emphasized that the code signifies a crucial development in the regulatory and legal landscape surrounding internet platforms. This landscape is evolving in response to the explosion of products that automatically generate lifelike content, presenting new challenges and responsibilities for tech giants like Google and Microsoft.

Inman Grant highlighted that an earlier code proposed by Google and Microsoft did not address AI-generated content adequately. Consequently, she called upon these industry giants to reevaluate and improve the code to align with the emerging AI landscape.

“When the biggest players in the industry announced they would integrate generative AI into their search functions, we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go,” Inman Grant explained.

Microsoft’s commitment to shielding its customers from legal repercussions tied to AI-generated content reflects its dedication to responsible AI development and its recognition of the evolving legal and ethical concerns associated with AI. As a responsible tech leader, Microsoft is poised to play a pivotal role in shaping the industry’s response to these challenges.

This development comes on the heels of the Australian regulator registering safety codes for various other internet services, including social media, smartphone applications, and equipment providers. These codes are set to take effect in late 2023, marking a significant milestone in Australia’s efforts to ensure online safety and security.


While this regulatory initiative is a positive step towards addressing the risks posed by AI-generated content, it also raises questions about privacy and the balance between security and personal freedoms. The regulator is still in the process of developing safety codes related to internet storage and private messaging services, an endeavor that has faced resistance from privacy advocates worldwide.

In conclusion, Microsoft’s commitment to protecting its users from AI-generated harmful content is a proactive response to evolving challenges in the digital landscape. As technology continues to advance, it is imperative for industry leaders to collaborate with regulators and stakeholders to establish guidelines and practices for the responsible use of AI.

Australia to Require AI-made Child Abuse Material to be Removed from Search Results


In a significant move to combat the proliferation of child sexual abuse material generated by AI, Australia’s internet regulator has announced that it will mandate search engines such as Google and Bing to take strict measures to prevent the dissemination of such harmful content. 

Image Source: rnz.co.nz

This initiative comes as part of a new code drafted in collaboration with industry giants at the Australian government’s request, aimed at safeguarding the digital landscape from AI-generated child abuse material.

E-Safety Commissioner Julie Inman Grant revealed that the code, which is designed to protect the online community, imposes two crucial requirements on search engines. Firstly, it compels search engines to ensure that AI-generated child abuse material does not appear in search results. Secondly, it prohibits using generative AI to produce synthetic versions of such explicit content, commonly referred to as “deepfakes.”

“The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree,” Inman Grant acknowledged. This rapid expansion of AI technology has necessitated a reevaluation of regulatory and legal frameworks governing internet platforms.

Inman Grant pointed out that an earlier code drafted by Google (owned by Alphabet) and Bing (owned by Microsoft) did not address the emerging issue of AI-generated content. Consequently, she urged these industry giants to revise their approach. “When the biggest players in the industry announced they would integrate generative AI into their search functions, we had a draft code that was clearly no longer fit for purpose. We asked the industry to have another go,” Inman Grant emphasized.

A spokesperson for the Digital Industry Group Inc., an Australian advocacy organization representing Google and Microsoft, expressed satisfaction with the approval of the revised code. “We worked hard to reflect recent developments in relation to generative AI, codifying best practices for the industry and providing further community safeguards,” the spokesperson stated.

This development follows the regulator’s earlier initiatives to establish safety codes for various internet services, including social media platforms, smartphone applications, and equipment providers, which will take effect in late 2023. However, the regulator continues to face challenges in developing safety codes for internet storage and private messaging services, with privacy advocates worldwide voicing concerns.


As Australia takes a proactive stance in addressing the grave issue of AI-generated child abuse material, it serves as a noteworthy example of the evolving regulatory landscape surrounding internet platforms. The code aims to strike a balance between harnessing the potential of AI technology and safeguarding the well-being of online users, particularly the vulnerable, as the digital realm continues to evolve.