Your Tech Story

Search Results for: open-source

Solana Co-Founder: To Keep the Next Great American Founder in America, Congress Must Regulate Crypto. But First Lawmakers Should Learn How it Works.

Anatoly Yakovenko, the co-founder of Solana and CEO of Solana Labs, recently shared his perspective on the importance of Congress regulating cryptocurrency to foster innovation and retain talented entrepreneurs in the United States. 

Image Source: bitnation.co

Born under Soviet rule in modern-day Ukraine, Yakovenko moved to America at the age of 11 and has since been a champion for open and accessible technology. Yakovenko’s journey from a young immigrant to a successful entrepreneur mirrors the American dream, but he worries that regulatory hurdles are driving away the next generation of innovators in the blockchain space. In his view, Congress should take proactive steps to create a regulatory framework that both protects consumers and encourages entrepreneurship.

The blockchain revolution has spawned thousands of entrepreneurs with ambitious projects, many of whom are challenging corporate giants in industries like wireless networks, ridesharing, food delivery, and social media. However, building a blockchain company in a compliant manner is a complex and expensive process, dissuading young founders from pursuing their dreams in the U.S.

Yakovenko highlights an alarming decline in the share of open-source blockchain developers based in the U.S., from 42% in 2018 to 29% in 2022, and argues that regulatory clarity is urgently needed to stem this talent drain.

While acknowledging the need to combat scams and protect consumers, Yakovenko argues that the entire blockchain industry should not be punished for the actions of a few bad actors. Instead, he calls for a regulatory framework that fosters innovation while maintaining American values.

In July, two Congressional committees advanced legislation aimed at creating regulatory frameworks for digital assets and stablecoins, a bipartisan effort that Yakovenko applauds. While these bills may not be perfect, he urges Congress to move forward with them and continue refining the regulatory landscape.

Beyond legislation, Yakovenko emphasizes the importance of government investment in blockchain research and development, citing historical examples of technologies like GPS and the internet that were initially incubated by the U.S. government. He urges policymakers to experiment with blockchain technology and explore ways to harness its potential for public benefit.

Yakovenko concludes by inviting an open conversation between blockchain entrepreneurs and policymakers, advocating for collaboration to shape a regulatory framework that not only protects consumers but also encourages innovation and keeps talented founders building in America.

As the blockchain industry continues to evolve, Yakovenko’s insights underscore the critical role that Congress and government institutions must play in ensuring that the United States remains a hub for technological innovation and entrepreneurship in the digital age.

IBM Commits To Train 2 Million in Artificial Intelligence in Three Years, With a Focus on Underrepresented Communities

IBM today announced a commitment to train two million learners in artificial intelligence by the end of 2026, with a focus on underrepresented communities, to help close the global AI skills gap. To meet this goal worldwide, IBM is offering new generative AI courses through IBM SkillsBuild, expanding its AI education partnerships with universities around the world, and working with collaborators to deliver AI training to adult learners. The effort builds on IBM’s existing career-development programs and platforms to improve access to in-demand AI education and technical roles.

Image Source: ffnews.com

The latest global study from the IBM Institute for Business Value found that surveyed CEOs expect the adoption of artificial intelligence and automation to require reskilling roughly forty percent of their workforce over the next three years, primarily employees in entry-level roles. This is further evidence that generative AI is creating demand for new roles and skills.

“AI skills will be essential to tomorrow’s workforce,” said Justina Nixon-Saintil, IBM Vice President & Chief Impact Officer. “That’s why we are investing in AI training, with a commitment to reach two million learners in three years, and expanding IBM SkillsBuild to collaborate with universities and nonprofits on new generative AI education for learners all over the world.”

IBM is working with universities around the world to build their AI capabilities, drawing on its network of experts. University faculty will be able to attend IBM-led training, with certifications awarded on completion of immersive skilling experiences and lectures. IBM will also supply course materials, including self-guided artificial intelligence (AI) learning paths, for instructors to use in the classroom, and will give students flexible resources such as free online coursework in generative AI and Red Hat open-source tools, alongside instruction from faculty.

Learners worldwide can access AI education through IBM SkillsBuild, a platform built by IBM experts to deliver training on the latest technological advances.

IBM SkillsBuild already provides free training in chatbots, the basics of AI, and important subjects like AI ethics. The latest generative AI roadmap adds new coursework and enhanced features.

The enhanced partner edition of IBM SkillsBuild may also include workshops, interactions with IBM coaches and mentors, project-based learning, access to IBM software, dedicated support from partners throughout the learning process, and connections to employment opportunities.

Meta Launches AI Coding Software to Compete With OpenAI

In a bold move to solidify its position as a strong contender in the AI landscape, Meta Platforms Inc. has unveiled its latest innovation – an artificial intelligence coding tool named Code Llama. The announcement comes as part of Meta’s ongoing efforts to compete head-on with giants like OpenAI, backed by Microsoft Corp., and Alphabet Inc.’s Google.

Image Source: dig.watch

Code Llama, introduced just last Thursday, is a revolutionary coding assistant powered by generative AI. This innovative tool is set to transform the way developers write code by providing intelligent suggestions and enhancements. Leveraging the capabilities of AI, Code Llama aims to significantly boost developer efficiency, ultimately resulting in faster and more streamlined software development processes.

One of the most remarkable aspects of Code Llama is that Meta has chosen to make the underlying generative AI model open source. This strategic move allows other organizations to harness the power of Code Llama’s technology for their own purposes. As highlighted in a recent blog post by Meta, companies now have the opportunity to create their own tailored coding tools using this cutting-edge AI, reducing their dependence on existing solutions from competitors.
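With the underlying model open-sourced, developers can build tooling directly on it. One capability Meta described for Code Llama's base models is code infilling, where the model generates the code that belongs between a given prefix and suffix. The sketch below shows how such an infill prompt might be assembled; the sentinel-token spellings are an assumption for illustration and should be checked against Meta's release.

```python
# Hypothetical sketch of building a code-infilling prompt for Code Llama.
# The <PRE>/<SUF>/<MID> sentinel tokens follow the format described in
# Meta's release; exact spellings here are an assumption for illustration.

def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Ask the model to generate the code between `prefix` and `suffix`."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: ask the model to fill in the body of a function.
prompt = build_infill_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result",
)
print(prompt)
```

Feeding such a prompt to the model would, in principle, yield the missing middle section, which a coding tool can then splice back into the editor.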

In recent months, Meta has been on a mission to democratize AI by releasing open-source versions of AI solutions that directly rival those offered by its competitors. This trend started with the launch of a commercial variant of their extensive language model, similar to the technology that powers OpenAI’s renowned ChatGPT. By giving companies access to their AI chatbot technology, Meta has paved the way for cost-effective chatbot development, sidelining expenses tied to software from tech giants like OpenAI, Google, and Microsoft.

Code Llama is set to continue this trend, simplifying the creation of AI coding tools for businesses. This groundbreaking tool aims to replace the need for purchasing similar products from competitors such as Microsoft’s GitHub Copilot, which relies on OpenAI’s technology. While Code Llama will be accessible for most users at no cost, Meta has indicated that certain large enterprises will have the option to access enhanced features through a paid subscription model.

The development of generative AI technologies has become a focal point for Meta, evident from the establishment of a dedicated product group solely focused on advancing generative AI capabilities. Mark Zuckerberg, CEO of Meta, has consistently emphasized the company’s vision of seamlessly integrating AI throughout its entire product spectrum. Internally, Meta is actively encouraging the adoption of its AI-powered chatbot, Metamate, among its employees. Moreover, there’s anticipation surrounding the imminent launch of a public-access chatbot in the coming weeks.

In conclusion, Meta’s introduction of Code Llama marks a significant milestone in the company’s pursuit to establish itself as a prominent player in the AI landscape. With the power of generative AI and the open-source approach, Code Llama not only empowers developers but also signals a shift towards more accessible and democratized AI tools across the industry.

What is Llama 2: Meta’s AI explained

Earlier this week, Meta, the parent organization of Facebook, released Llama 2, its second-generation open-source large language model (LLM).

Unlike its predecessor Llama 1, which was closely guarded and accessible only upon request, Llama 2 is now freely available for anyone to use, explore, and create cutting-edge AI-powered applications.

Image Source: dexerto.com

Powered by a massive amount of data, Llama 2 boasts significant improvements over its predecessor. Meta proudly claims that Llama 2 is trained on 40% more data and possesses double the context length, resulting in more accurate and powerful language generation capabilities.

This advancement enables the LLM to produce human-like responses, making it a strong foundation for building chatbots similar to ChatGPT and Google Bard.

Moreover, Meta has collaborated with Microsoft to co-develop Llama 2. This partnership ensures that the applications built with Llama 2 will soon be available not only on Windows PCs and laptops but also on smartphones powered by Qualcomm’s Snapdragon SoCs. The availability of this across multiple platforms opens up a world of possibilities for developers and end-users alike.

Llama 2 comes in three parameter sizes – 7B, 13B, and 70B – each catering to different use cases. However, Meta has decided to withhold the 34B parameter size from public release.
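For a rough sense of what those parameter counts imply in practice, here is a back-of-the-envelope estimate of the weight storage each size needs, assuming 16-bit (2-byte) weights and ignoring runtime overhead such as activations and the KV cache:

```python
# Approximate memory footprint of Llama 2's released sizes, assuming
# 16-bit (2-byte) weights. Runtime overhead (activations, KV cache,
# framework buffers) is ignored, so real requirements are higher.

def fp16_weights_gb(params_billions: float) -> float:
    """Size of the model weights in GB at 2 bytes per parameter."""
    return params_billions * 1e9 * 2 / 1e9  # = 2 GB per billion params

for size in (7, 13, 70):
    print(f"{size}B -> ~{fp16_weights_gb(size):.0f} GB of weights")
```

This is why the 7B model fits comfortably on a single consumer GPU while the 70B model generally requires multiple accelerators or aggressive quantization.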

While the model is open for research and commercial use, companies with more than 700 million monthly active users must seek permission from Meta before employing it, possibly to prevent any undue concentration of AI power.

The potential use cases of this LLM are vast and diverse. It can be leveraged to create consumer and enterprise chatbots, assist in language generation, fuel research, and power various AI-driven tools. However, despite its open-source nature, Meta places soft limits on enterprise users to ensure responsible and ethical AI development.

While Llama 2 exhibits impressive language generation abilities, it’s crucial to remember that chatbots built on these models may have certain limitations. As with any AI language model, the accuracy of responses depends on the questions asked and the data on which the model was trained. Tricky queries and coding-related questions tend to yield better results, while some basic inquiries may lead to vague or incorrect answers.

It’s important for users to exercise caution when interacting with AI-powered chatbots. Companies employing these chatbots may utilize user-provided data to further train the models, and there have been instances of malicious actors exploiting chatbots to steal sensitive information. As a precaution, refrain from sharing personally identifiable data with such chatbots.

Although Llama 2 is not yet packaged as an end product like ChatGPT, it is accessible to those with expertise in working with LLMs. All three models can be downloaded from Meta’s website, opening up exciting opportunities for researchers and developers keen on experimenting with this cutting-edge AI technology.

In conclusion, Llama 2 represents a significant milestone in the world of AI language models. With its openness, power, and versatility, it has the potential to revolutionize chatbots and language generation applications across various domains.

As more developers and researchers explore the possibilities of Llama 2, we can expect to witness exciting innovations that push the boundaries of AI and redefine human-computer interactions.

What is WormGPT? The new AI behind the cyberattacks

In recent news, a dangerous AI tool named WormGPT has been gaining popularity on cybercrime forums within the dark web. Marketed as a “sophisticated AI model,” WormGPT is specifically designed to generate human-like text for hacking campaigns, enabling cybercriminals to execute attacks on an unprecedented scale.

According to cybersecurity expert Daniel Kelley, who shared his findings via the security firm SlashNext, WormGPT was trained on a diverse range of data sources, with a particular emphasis on malware-related data. This training allows the tool to generate text that can be used for a variety of malicious activities.

Image Source: dataconomy.com

The implications of WormGPT’s emergence are concerning for everyday internet users and businesses alike. One of the key issues lies in the speed and volume of scams that a language model like this can generate simultaneously.

The rapid text generation capability of AI models, combined with WormGPT’s malicious intent, poses a significant threat. Cyberattacks such as phishing emails can now be replicated easily, even by those with minimal cybercriminal skills.

Adding to the danger is the promotion of “jailbreaks” on ChatGPT, a similar AI language model by OpenAI, which essentially allows for the manipulation of prompts and inputs to create harmful content or reveal sensitive information. The consequences of such manipulation can be severe, leading to potential data breaches, inappropriate content dissemination, and the development of harmful code.

Kelley pointed out that generative AI, like WormGPT, can produce emails with impeccable grammar, making them appear legitimate and decreasing the chances of being flagged as suspicious. This democratizes the execution of sophisticated Business Email Compromise (BEC) attacks, providing access to powerful hacking tools for a broader spectrum of cybercriminals, including those with limited technical expertise.

While companies such as OpenAI (with ChatGPT) and Google (with Bard) are actively working to combat the misuse of large language models (LLMs), there are concerns about the effectiveness of these countermeasures.

A recent report by Check Point found that Bard’s anti-abuse restrictions in the realm of cybersecurity are significantly weaker than ChatGPT’s, making it easier to generate malicious content using Bard.

The introduction of WormGPT to the dark web follows a disconcerting trend. Researchers from Mithril Security recently revealed that they had modified an existing open-source AI model to spread disinformation, dubbing the result PoisonGPT. The potential consequences of such AI technology are still largely unknown.

As AI has already demonstrated the ability to generate and spread disinformation, manipulate public opinion, and even influence political campaigns, the emergence of bootleg AI models like WormGPT only exacerbates the risks faced by unsuspecting users.

In conclusion, the rise of WormGPT on the dark web signifies a troubling development in the world of cybercrime. The ease with which this AI tool can generate realistic and malicious content poses a significant threat to cybersecurity.

As cyber threat actors find new ways to exploit AI technology, it becomes crucial for AI developers and cybersecurity experts to remain vigilant and take proactive measures to safeguard against potential abuses of AI language models.

Additionally, internet users and organizations must stay informed about these developments and implement robust security measures to protect themselves from the ever-evolving landscape of cyber threats.

What makes a crypto asset a security in the US?

The recent legal actions taken by the U.S. Securities and Exchange Commission (SEC) against prominent crypto platforms Coinbase and Binance have sparked a significant debate regarding the classification of digital tokens as securities.

Accusing these platforms of operating illegally and evading disclosure requirements, the SEC’s lawsuits seek to determine whether certain cryptocurrencies should have been registered as securities. As the United States leads the regulatory crackdown on cryptocurrencies, this case becomes a crucial test of the SEC’s jurisdiction over the industry.

Image Source: zawya.com

The SEC alleges that Coinbase allowed users to trade at least 13 crypto assets that should have been registered as securities, including tokens like Solana, Cardano, and Polygon.

However, Coinbase and other industry players vehemently deny listing any securities, contending that most cryptocurrencies, which operate on blockchain networks, do not meet the definition of securities under U.S. law.

Industry participants argue that the SEC’s determination of whether digital assets are securities has been vague and inconsistent, making it challenging for market players to navigate regulatory requirements.

To determine whether crypto assets should be classified as securities, the SEC relies on the Howey Test. This test stems from a landmark 1946 U.S. Supreme Court case involving investors in Florida orange groves.

It established that an investment involving money in a common enterprise with profits derived solely from the efforts of others falls under the definition of an investment contract or security. This precedent empowers the SEC to intervene when investments resemble this type of arrangement, indicating the potential classification of certain cryptocurrencies as securities.
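The prongs of the test can be sketched as a simple checklist. This is an illustration of the article's description only, not legal analysis; in practice courts weigh the facts of each arrangement case by case:

```python
# Illustrative checklist for the Howey Test (SEC v. W.J. Howey Co., 1946).
# A simplification for exposition, not legal advice: real determinations
# are fact-specific and made by courts, not by a boolean function.

def is_investment_contract(investment_of_money: bool,
                           common_enterprise: bool,
                           expectation_of_profit: bool,
                           from_efforts_of_others: bool) -> bool:
    """An arrangement is an investment contract (a security) only if
    every prong of the Howey Test is satisfied."""
    return (investment_of_money and common_enterprise
            and expectation_of_profit and from_efforts_of_others)

# Bitcoin, as characterized later in this article: no common enterprise,
# and profits do not depend on a promoter's efforts.
print(is_investment_contract(True, False, True, False))  # False
```

Because every prong must hold, a token issuer disputing even one of them (as Ripple does with the common-enterprise prong) contests the security classification as a whole.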

In previous rulings, judges in cases that reached court have generally sided with the SEC, determining that specific crypto assets are securities. These decisions have been based on developers’ statements linking the value of digital assets to the efforts of others, thereby establishing that investor profits depend on external factors.

Furthermore, courts have concluded that investors participate in a common enterprise when their funds are pooled by the token issuer and used for system development. Many crypto-related cases brought by the SEC have ended in settlements, with companies paying fines and committing to comply with U.S. securities laws, sometimes resulting in companies leaving the U.S. market or discontinuing their crypto projects.

The outcome of the SEC’s case against Ripple Labs over the cryptocurrency XRP holds substantial implications for the classification of crypto assets as securities. Ripple argues that there was no common enterprise involved since the associated blockchain was fully operational before XRP was ever sold.

In contrast, Bitcoin is generally not considered a security due to its anonymous and open-source origins, where investor profits are not dependent on the efforts of developers or managers. The Ripple case’s resolution will likely provide further clarity on the SEC’s stance regarding securities classification.

The ongoing legal battle between the SEC and major crypto platforms like Coinbase and Binance centers around the classification of cryptocurrencies as securities. As the industry calls for clearer regulations and guidance, the outcomes of current cases will significantly shape the regulatory landscape for the U.S. crypto industry.