
Technology

Biden signs bill forcing TikTok to part with its Chinese parent company or face a national ban



TikTok has condemned as “unconstitutional” the new law signed by President Joe Biden that forces the social media app to part ways with its Chinese parent company, ByteDance.

On Wednesday, April 24, Biden signed a package of controversial legislation that includes billions of dollars in aid to Ukraine, Israel and Taiwan, along with a measure requiring TikTok’s Chinese parent company, ByteDance, to sell the social media platform or face a nationwide ban in the United States, CNBC reports.

“The way to my desk was difficult. It should have been easier and it should have happened sooner,” Biden said. “But in the end, we did what America always does: We got to the point.”

The law gives ByteDance nine months to sell TikTok – or roughly a year, if Biden grants a 90-day extension – before the app faces a national ban. The popular social media platform issued a statement in response, calling the law unconstitutional.

“This unconstitutional law is a ban on TikTok and we will challenge it in court. We believe that the facts and the law are clearly on our side and that we will ultimately prevail,” the company wrote.

“The fact is that we have invested billions of dollars to protect data in the US and to protect our platform from external influence and manipulation. This ban would devastate seven million businesses and silence 170 million Americans. As we continue to challenge this unconstitutional ban, we will continue to invest and innovate to ensure that TikTok remains a space where Americans of all backgrounds can safely come to share their experiences, find joy and be inspired.”

TikTok CEO Shou Zi Chew posted a video message in which he vowed to fight back against what he called a “disappointing moment.”

“Make no mistake. This is a ban. A ban on TikTok and a ban on you and your voice,” Chew said. “Politicians may claim otherwise, but don’t be fooled. Many of the people who sponsored this bill admit their ultimate goal is to ban TikTok.”

Pointing out the irony of the new law, Chew criticized the hypocrisy of the US government working to ban an app that represents “the same American values that make the United States a beacon of freedom.”

In addition to the possible TikTok ban, the new law provides $60 billion in aid for Ukraine, $26 billion for Israel and $8 billion for security in Taiwan and the Indo-Pacific region. The Biden campaign, meanwhile, still plans to use TikTok to reach voters as part of the 2024 re-election effort.


This article was originally published on: www.blackenterprise.com

Technology

Why RAG won’t solve generative AI’s hallucination problem


Hallucinations – essentially the lies that generative artificial intelligence models tell – pose an enormous problem for firms seeking to integrate the technology into their operations.

Because models have no real intelligence and simply predict words, images, speech, music and other data according to an internal schema, they sometimes get it wrong. Badly wrong. A recent article in The Wall Street Journal cites a case in which Microsoft’s generative AI invented meeting participants and implied that conference calls covered topics that weren’t actually discussed on the call.

As I wrote a while ago, hallucinations may be an unsolvable problem with today’s transformer-based model architectures. Yet many generative AI vendors suggest they can be more or less eliminated through a technical approach called retrieval augmented generation (RAG).

Here’s how one vendor, Squirro, pitches it:

At the core of the offering is the concept of Retrieval Augmented LLMs, or Retrieval Augmented Generation (RAG), embedded in the solution… (Our generative AI) is unique in its promise of zero hallucinations. Every piece of information it generates is traceable to its source, ensuring credibility.

Here’s a similar pitch from SiftHub:

Using RAG technology with fine-tuned large language models and industry-specific knowledge training, SiftHub allows companies to generate personalized responses without hallucinations. This guarantees greater transparency and reduced risk, and inspires absolute confidence in using AI for all their needs.

RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. Applied to a model, RAG retrieves documents that may be relevant to a given question – for example, the Wikipedia page for the Super Bowl – using what is essentially a keyword search, and then asks the model to generate an answer given this additional context.
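
To make that flow concrete, here is a minimal sketch in Python. The tiny corpus, the keyword-overlap scoring and the commented-out `llm_client.generate()` call are illustrative assumptions rather than any vendor’s actual pipeline: retrieve the documents that best match the question’s keywords, then prepend them to the prompt so the model can answer from them instead of from parametric memory alone.

```python
# A minimal sketch of the RAG flow described above: score documents by keyword
# overlap with the question, keep the best matches, and prepend them to the
# prompt. The corpus, the scoring and the commented-out llm_client.generate()
# call are illustrative assumptions, not any vendor's actual pipeline.
from collections import Counter

CORPUS = {
    "super_bowl_lviii": "The Kansas City Chiefs won Super Bowl LVIII in February 2024, "
                        "beating the San Francisco 49ers in overtime.",
    "transformer_basics": "The transformer architecture relies on self-attention to model "
                          "relationships between tokens in a sequence.",
}

def keyword_score(query: str, document: str) -> int:
    """Count how often the query's terms appear in the document (a crude stand-in for BM25)."""
    query_terms = set(query.lower().split())
    doc_terms = Counter(document.lower().split())
    return sum(doc_terms[t] for t in query_terms)

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k highest-scoring (doc_id, text) pairs for the query."""
    ranked = sorted(CORPUS.items(), key=lambda item: keyword_score(query, item[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages so the model answers from them, not just parametric memory."""
    context = "\n\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return (
        "Answer the question using only the context below, and cite the [doc_id] you relied on.\n\n"
        f"{context}\n\nQuestion: {query}"
    )

# The finished prompt would then be sent to whatever LLM API is in use, e.g.:
# answer = llm_client.generate(build_prompt("Who won the Super Bowl last year?"))  # hypothetical client
print(build_prompt("Who won the Super Bowl last year?"))
```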

“When you interact with a generative AI model like ChatGPT or Llama and ask a question, by default the model answers from its ‘parametric memory’ – i.e. the knowledge stored in its parameters as a result of training on massive data from the internet,” explained David Wadden, a research scientist at AI2, the AI-focused research division of the nonprofit Allen Institute. “But, just as you’re likely to give more accurate answers if you have a reference (such as a book or a file) in front of you, the same is true for some models.”

RAG is undeniably useful – it allows you to attribute what a model generates to the retrieved documents in order to verify its factuality (with the added benefit of avoiding potentially copyright-infringing regurgitation). RAG also lets companies that don’t want their documents used to train models – say, companies in highly regulated industries such as healthcare and law – allow models to draw on those documents in a more secure and temporary way.

But RAG certainly can’t stop a model from hallucinating. And it has limitations that many vendors gloss over.

Wadden says RAG works best in “knowledge-intensive” scenarios where a user wants to use the model to fill an “information need” – for example, to find out who won the Super Bowl last year. In such scenarios, the document that answers the question is likely to contain many of the same keywords as the question (e.g. “Super Bowl,” “last year”), making it relatively easy to find via keyword search.

Things get harder with reasoning-intensive tasks such as coding and math, where it’s more difficult to specify in a keyword-based query the concepts needed to answer the question – much less identify which documents might be relevant.

Even for basic questions, models can get “distracted” by irrelevant content in the documents, particularly long documents where the answer isn’t obvious. Or they can – for reasons still unknown – simply ignore the contents of the retrieved documents and rely instead on their parametric memory.

RAG is also expensive in terms of the hardware needed to deploy it at scale.

That’s because retrieved documents, whether from the web, an internal database or somewhere else, have to be kept in memory – at least temporarily – so the model can refer back to them. A further expense is the compute for the expanded context a model has to process before generating a response. For a technology already notorious for the amount of compute and electricity it requires even for basic operations, this is a serious consideration.
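
As a rough, back-of-envelope illustration (using a crude words-to-tokens rule of thumb in place of a real tokenizer, with made-up passage sizes, so the numbers are only indicative), here is how quickly the input grows once retrieved passages are prepended to a short question:

```python
# A rough, purely illustrative estimate of how retrieval inflates the context
# a model must process. The words-to-tokens ratio is a crude rule of thumb,
# not a real tokenizer, and the passage sizes are made up.
WORDS_PER_TOKEN = 0.75  # common rule of thumb: roughly 0.75 words per token

def estimated_tokens(text: str) -> int:
    """Approximate token count from a whitespace word count."""
    return int(len(text.split()) / WORDS_PER_TOKEN)

question = "Who won the Super Bowl last year?"
# Pretend the retriever returned two documents of a few hundred words each.
retrieved_passages = [" ".join(["lorem"] * 400), " ".join(["ipsum"] * 600)]

bare_tokens = estimated_tokens(question)
rag_tokens = bare_tokens + sum(estimated_tokens(p) for p in retrieved_passages)

print(f"Prompt without retrieval: ~{bare_tokens} tokens")
print(f"Prompt with retrieval:    ~{rag_tokens} tokens "
      f"(~{rag_tokens / max(bare_tokens, 1):.0f}x more input to process)")
# On top of the larger input, self-attention in a vanilla transformer scales
# roughly quadratically with sequence length, so compute and memory costs grow
# faster than the token count alone suggests.
```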

That’s not to say RAG can’t be improved. Wadden noted many ongoing efforts to train models to make better use of the documents RAG retrieves.

Some of these efforts involve models that can “decide” when to use the documents, or models that can skip retrieval altogether if they deem it unnecessary. Others focus on ways to index massive document datasets more efficiently, and on improving search through better representations of documents – representations that go beyond keywords.
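
Here is a sketch of what “representations that go beyond keywords” looks like in practice: documents and queries are mapped to dense vectors and ranked by cosine similarity rather than by term overlap. The `embed()` function below is a hash-based placeholder purely so the example runs end to end; a real system would call a learned sentence-embedding model and use an approximate-nearest-neighbour index instead of a brute-force scan.

```python
# A sketch of dense retrieval: documents and queries are mapped to vectors and
# ranked by cosine similarity instead of keyword overlap. embed() is a
# hash-based placeholder so the example runs end to end; a real system would
# use a learned sentence-embedding model and an ANN index.
import math

def embed(text: str, dims: int = 64) -> list[float]:
    """Placeholder embedding: hash each token into a fixed-size vector."""
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

documents = {
    "chain_rule": "Differentiate the outer function, then multiply by the derivative of the inner function.",
    "super_bowl": "The Kansas City Chiefs won Super Bowl LVIII in February 2024.",
}
doc_vectors = {doc_id: embed(text) for doc_id, text in documents.items()}

def dense_retrieve(query: str, k: int = 1) -> list[str]:
    """Return the ids of the k documents whose vectors are closest to the query vector."""
    q = embed(query)
    ranked = sorted(doc_vectors, key=lambda doc_id: cosine(q, doc_vectors[doc_id]), reverse=True)
    return ranked[:k]

print(dense_retrieve("How do I take the derivative of a nested function?"))
```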

“We’re pretty good at retrieving documents based on keywords, but not very good at retrieving documents based on more abstract concepts, such as the proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify relevant documents for more abstract generation tasks. I think that’s mostly an open question at this point.”

So while RAG can help reduce a model’s hallucinations, it isn’t the answer to all of AI’s hallucinatory problems. Beware of any vendor who tries to claim otherwise.

This article was originally published on: techcrunch.com

Technology

Luminar lays off 20% of staff and outsources lidar production


Lidar maker Luminar is reducing its workforce by 20% and will rely more heavily on its contract manufacturing partner as part of a restructuring that will shift the company to a more “asset-conserving” business model as it seeks to scale production.

The cuts will affect roughly 140 employees and take effect immediately. Luminar is also cutting ties with “most” of its contract workers.

“Today we stand at the intersection of two realities: the core of our business has never been stronger in technology, products, industrialization and commercialization; at the same time, the capital markets’ perception of our company has never been more challenging,” billionaire founder and CEO Austin Russell said in a letter posted on the Luminar website. “The business model and cost structure that enabled us to achieve this leadership position no longer meet the needs of the company.”

Russell wrote in the letter that the restructuring will enable Luminar to bring products to market faster, “drastically reduce” costs and ensure improved profitability for the company. The company said in a regulatory filing that the changes would reduce operating costs “by $50 million to $65 million annually.” Luminar is also shrinking its global footprint “by subleasing some or all of certain facilities.”

Luminar will continue to operate its Florida facility, which is used for development, testing and R&D purposes, according to spokesperson Milin Mehta.

In April, Luminar announced that it had begun shipping volume-production lidar sensors to Volvo for installation in the automaker’s EX90 luxury SUV. It also announced plans to deepen its relationship with Taiwanese contract manufacturer TPK Holding. TPK “has committed to an exclusive partnership with Luminar,” Russell wrote in his letter.

This article was originally published on: techcrunch.com

Technology

Iconiq raises $5.15 billion for seventh flagship fund


SEC filings show Iconiq Capital raised $5.15 billion across two funds affiliated with its seventh family of growth funds.

The firm, which began in 2011 as a family office managing the capital of some of the most prominent and wealthiest figures in the technology industry, including Mark Zuckerberg and Jack Dorsey, originally targeted $5.75 billion, The Wall Street Journal reported in March 2022. It is unclear whether the firm is still raising capital toward that goal.

Iconiq didn’t immediately reply to a request for comment.

The fund size represents a major increase over Iconiq Fund VI’s $3.75 billion goal.

Iconiq’s latest fund haul is impressive considering that many other growth investors have fallen short of their targets lately. Most notably, Tiger Global closed its latest venture capital fund at $2.2 billion, the firm’s smallest fund since 2014, Bloomberg reported. Tiger initially planned to raise $6 billion – less than half of the $12.7 billion its predecessor fund closed on in March 2022.

The two giant funds aren’t in exactly the same situation. Tiger Global was widely criticized for deploying capital too quickly, at exorbitant prices, during the tech boom of 2020 and 2021 (though the firm has always denied that it overpaid). And unlike Tiger Global, which has been actively selling shares to secure liquidity, Iconiq has been buying secondary positions, according to two sources.

Iconiq’s substantial fundraise likely means its backers are relatively happy with the firm’s investment strategy.

According to PitchBook data, Iconiq has completed several dozen exits from its portfolio in recent years, including the IPOs of Snowflake, Airbnb, GitLab and HashiCorp. In 2023, Iconiq invested $1.1 billion across 22 companies, the firm says, and its portfolio includes startups such as Wire, Canva, Ramp, ServiceTitan, Writer and Pigment.

Fund VII-B raised $3.95 billion from 291 investors, while Fund VII closed on $1.26 billion from 462 backers, according to the filings.

Iconiq’s seventh flagship vehicle will invest in 20 to 25 technology companies, according to an Insider Buyback report based on the New Mexico Investment Board meeting held in March 2022.

This article was originally published on: techcrunch.com