
Technology

This week in AI: Generative AI spams academic journals

Hello, folks, and welcome to TechCrunch’s regular AI newsletter.

This week in AI: generative AI is beginning to spam academic publications, a discouraging new development on the disinformation front.

In a post on Retraction Watch, a blog that tracks retractions of academic research, assistant professors of philosophy Tomasz Żuradzki and Leszek Wroński wrote about three journals published by Addleton Academic Publishers that appear to consist entirely of AI-generated articles.

The journals feature articles that follow an identical template, stuffed with buzzwords like “blockchain,” “metaverse,” “internet of things,” and “deep learning.” They list the same editorial board (10 of whose members are deceased) and an inconspicuous address in Queens, New York, that appears to be a private home.

So what’s going on, you might ask? Isn’t AI-generated spam simply the cost of doing business online these days?

Yeah. But the fake journals show how easily the systems used to evaluate researchers for promotions and tenure can be fooled, and that could be a warning sign for knowledge workers in other industries.

On at least one widely used rating system, CiteScore, the journals rank in the top 10 for philosophy research. How is that possible? They cite each other extensively. (CiteScore factors citations into its calculations.) Żuradzki and Wroński found that of the 541 citations in one of Addleton’s journals, 208 come from the publisher’s other bogus publications.
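For a sense of scale, the self-citation share those figures imply is easy to check; a quick back-of-the-envelope sketch in Python, using only the numbers reported above:

```python
# Share of citations in one Addleton journal that point back to the
# publisher's own bogus publications, per Żuradzki and Wroński's count.
citations_total = 541
citations_from_same_publisher = 208

share = citations_from_same_publisher / citations_total
print(f"{share:.1%}")  # prints "38.4%"
```

Roughly two in five citations are self-referential, which is how the journals can climb a citation-weighted metric like CiteScore.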

“(These rankings) often serve as indicators of research quality for universities and funding institutions,” Żuradzki and Wroński wrote. “They play a key role in decisions about academic rewards, hiring and promotion, and thus can influence researchers’ publication strategies.”

You could argue that CiteScore is the problem; it’s clearly a flawed metric. And that’s not a wrong argument. But it’s also not wrong to say that generative AI and its abuse are disrupting the systems on which people’s lives depend in unexpected, and potentially quite harmful, ways.

There is a future in which generative AI forces us to rethink and redesign systems like CiteScore to be more equitable, holistic and inclusive. The bleaker alternative, and the one that exists today, is a future in which generative AI continues to run amok, wreaking havoc and ruining people’s working lives.

I hope we’ll correct course soon.

News

DeepMind soundtrack generator: DeepMind, Google’s AI research lab, says it’s developing AI technology to generate movie soundtracks. DeepMind’s AI combines audio descriptions (e.g., “jellyfish pulsating underwater, sea life, ocean”) with a video to create music, sound effects, and even dialogue that match the characters and tone of the video.

Robot chauffeur: Researchers at the University of Tokyo have developed and trained a “musculoskeletal humanoid” named Musashi to drive a small electric car around a test track. Equipped with two cameras in place of human eyes, Musashi can “see” the road ahead as well as the views reflected in the car’s side mirrors.

New AI search engine: Genspark, a new AI-powered search platform, uses generative AI to create custom summaries in response to queries. It has raised $60 million to date from investors including Lanchi Ventures; its latest financing round valued the company at $260 million post-money, which is decent considering Genspark competes with rivals like Perplexity.

How much does ChatGPT cost?: How much does ChatGPT, OpenAI’s ever-expanding AI-powered chatbot platform, cost? Answering this question is harder than you might think. To help you keep track of the various ChatGPT subscription options available, we’ve put together an updated ChatGPT pricing guide.

Science article of the week

Autonomous vehicles face an enormous number of edge cases, depending on location and situation. If you’re driving on a two-lane road and someone activates their left turn signal, does that mean they’re about to change lanes? Or that you should pass them? The answer may depend on whether you’re driving on I-5 or the highway.

A group of researchers from Nvidia, USC, UW and Stanford show in a paper just published at CVPR that many ambiguous or unusual circumstances can be resolved by, if you can believe it, having an AI read the local drivers’ handbook.

Their Large Language Driving Assistant, or LLaDA, gives an LLM access, even without fine-tuning, to the driving instructions for a state, country or region. Local rules, customs or signals can be found in that literature, and when an unexpected circumstance occurs (e.g., a horn, emergency lights or a flock of sheep), an appropriate action is generated (stop, pull over, honk).
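The approach can be sketched roughly as follows. Everything below (the handbook excerpts, function names, and the naive keyword lookup) is an illustrative stand-in, not the paper’s actual implementation, which uses an LLM to read the rules and generate the action:

```python
# Toy excerpts standing in for a real local drivers' handbook.
HANDBOOK = {
    "sheep": "Livestock on rural roads has right of way; stop and wait.",
    "siren": "Pull over to the edge of the road and stop for emergency vehicles.",
    "honk": "A honk behind you at a green light usually means proceed.",
}

def relevant_rule(event: str) -> str:
    """Look up local guidance for a surprising event (a real system would
    retrieve passages from the actual handbook rather than keyword-match)."""
    for keyword, rule in HANDBOOK.items():
        if keyword in event.lower():
            return rule
    return "No specific local rule found; default to cautious driving."

def decide(event: str) -> str:
    # In the real system an LLM reads the retrieved local rules and generates
    # the action; here the retrieved guidance itself stands in for that step.
    return f"{event} -> {relevant_rule(event)}"

print(decide("a flock of sheep crosses the road"))
```

The point is the pipeline shape: unexpected event in, locally relevant rule retrieved, action out.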

Image credits: Nvidia

This is by no means a complete, comprehensive driving system, but it shows an alternative path to a “universal” driving system that still encounters surprises. Or perhaps it’s a way for the rest of us to find out why people honk at us when we drive in unfamiliar places.

Model of the week

On Monday, Runway, a company that builds generative AI tools aimed at film and image content creators, unveiled Gen-3 Alpha. Trained on a vast number of images and videos from both public and internal sources, Gen-3 can generate video clips from text descriptions and still images.

Runway claims Gen-3 Alpha delivers “significant” improvements in generation speed and fidelity over Runway’s previous video flagship, Gen-2, as well as precise control over the structure, style and motion of the videos it creates. Runway says Gen-3 can also be customized to deliver more “stylistically controlled” and consistent characters, guided by “specific artistic and narrative requirements.”

Gen-3 Alpha has its limitations, including the fact that its footage maxes out at 10 seconds. But Runway co-founder Anastasis Germanidis promises that it’s just the first of several video-generating models to come in a family of next-generation models trained on Runway’s improved infrastructure.

Gen-3 Alpha is the latest of several generative video systems to hit the scene in recent months. Others include OpenAI’s Sora, Luma’s Dream Machine, and Google’s Veo. Together, they threaten to upend the film and TV industry as we know it, assuming they can overcome copyright challenges.

Grab bag

AI won’t take your next McDonald’s order.

McDonald’s this week announced that it will remove automated order-taking technology, which the fast food chain has been testing for the better part of three years, from more than 100 of its restaurants. The technology, developed with IBM and installed in restaurant drive-thrus, went viral last year for its tendency to misunderstand customers and make mistakes.

A recent piece in The Takeout suggests that artificial intelligence is losing its luster with fast food operators, who had recently expressed enthusiasm for the technology and its potential to boost efficiency (and reduce labor costs). Presto, a major player in the AI-powered drive-thru market, recently lost a major customer, Del Taco, and is facing mounting losses.

The problem is inaccuracy.

McDonald’s CEO Chris Kempczinski told CNBC in June 2021 that its voice recognition technology was accurate about 85% of the time, but that human staff had to assist with about one in five orders. Meanwhile, according to The Takeout, the best version of Presto’s system processes only about 30% of orders without human assistance.

So, even as artificial intelligence decimates some segments of the gig economy, it seems that certain jobs, especially those that require understanding a variety of accents and dialects, can’t be automated. At least for now.

This article was originally published on techcrunch.com

Technology

Introducing the Next Wave of Startup Battlefield Judges at TechCrunch Disrupt 2024

Startup Battlefield 200 is the highlight of every Disrupt, and we can’t wait to find out which of the thousands of startups that applied will have the chance to pitch to top venture capitalists at TechCrunch Disrupt 2024. Join us at Moscone West in San Francisco October 28–30 for an epic showdown where these startups will have the chance to make a major impact.

Get insight into what the judges look for in a winning company as they provide detailed feedback on the evaluation criteria. Don’t miss the opportunity to learn from their expert insights and discover the key characteristics that lead to startup success, only at Disrupt 2024.

We’re excited to introduce our next group of investors who will evaluate startups and dive into each pitch in an in-depth and insightful Q&A session. Stay tuned for more big names coming soon!

Alice Brooks, Partner, Khosla Ventures

Alice is a partner at Khosla Ventures, investing in sustainability, food, agriculture, and manufacturing/supply chain. She has worked with multiple startups in robotics, IoT, retail, consumer goods, and STEM education, and has led mechanical, electrical, and app development teams in the US and Asia. She also founded and managed manufacturing operations in factories in China and Taiwan. Prior to KV, Alice was the founder and CEO of Roominate, a STEM education company that helps girls learn engineering concepts through play.

Mark Crane, Partner, General Catalyst

Mark Crane is a partner at General Catalyst, a venture capital firm that works with founders from seed to endurance stage to help them build companies that can stand the test of time. He focuses on finding and investing in later-stage opportunities such as AuthZed, Bugcrowd, Resilience, and TravelPerk. Prior to joining General Catalyst, Mark was a vice president at Cove Hill Partners in Massachusetts. Before that, he was a senior associate at JMI Equity and an associate at North Bridge Growth Equity.

Sofia Dolfe, Partner, Index Ventures

Sofia partners with founders who use their unique perspective and personal understanding of a problem to build companies that drive behavioral change, create powerful network effects, and transform entire industries, from grocery and e-commerce to financial services and healthcare. Sofia is also one of Index Ventures’ gaming leads, working with some of the best gaming companies in Europe as they create a new generation of iconic gaming titles. She spends most of her time in the Nordics, but works with entrepreneurs across the continent.

Christine Esserman, Partner, Accel

Christine Esserman joined Accel in 2017 and focuses on software, internet, and mobile technology companies. Since joining Accel, Christine has helped lead the firm’s investments in Blackpoint Cyber, Linear, Merge, ThreeFlow, Bumble, Remote, Dovetail, Ethos, Guru, and Headway. Prior to joining Accel, Christine worked in product and operations roles at multiple startups. A native of the Bay Area, Christine graduated from the Wharton School at the University of Pennsylvania with a degree in Finance and Operations.

Haomiao Huang, Founding Partner, Matter Venture Partners

Haomiao Huang of Matter Venture Partners is a robotics researcher turned founder turned investor. He is especially passionate about companies that bring digital innovation to physical-economy businesses, with a focus on sectors such as logistics, manufacturing and transportation, and advanced technologies such as robotics and AI. Haomiao spent four years investing in hard tech with Wen Hsieh at Kleiner Perkins. He previously founded the smart home security startup Kuna, built autonomous cars at Caltech and, as part of his PhD research at Stanford, pioneered the aerodynamics and control of multi-rotor unmanned aerial vehicles. Kuna was part of the Y Combinator Winter ’14 cohort.

Don’t miss it!

The Startup Battlefield winner, who will walk away with a $100,000 cash prize, will be announced at Disrupt 2024, the epicenter of startups. Join 10,000 attendees to witness this breakthrough moment and see the next wave of tech innovation.

Register here and secure your spot to witness this epic battle of startups.

This article was originally published on techcrunch.com

Technology

India Considers Easing Market Share Caps for UPI Payments Operators

PhonePe UPI being used to accept payments at a roadside sunglasses stall.

The regulator that oversees India’s popular UPI payments rail is considering relaxing a proposed market share cap for operators like Google Pay, PhonePe and Paytm as it grapples with enforcing the restrictions, two people familiar with the matter told TechCrunch.

The National Payments Corporation of India (NPCI), which is regulated by the Indian central bank, is considering increasing the market share that UPI operators can hold to more than 40%, said two of the people, who requested anonymity because the information is confidential. The regulator had earlier proposed a 30% market share limit to encourage competition in the space.

UPI has become the most popular way to send and receive money in India, with the mechanism processing over 12 billion transactions monthly. Walmart-backed PhonePe holds about 48% market share by volume and 50% by value, while Google Pay has a 37.3% share by volume.

Paytm, once an industry heavyweight, has seen its market share fall to 7.2% from 11% late last year amid regulatory challenges.

According to several industry executives, raising the market share limits is likely to be a controversial move for the NPCI, as many UPI providers were counting on regulatory action to curb the dominance of PhonePe and Google Pay.

NPCI, which has previously declined to comment on market share, didn’t reply to a request for comment on Thursday.

The regulator originally planned to implement the market share caps in January 2021 but extended the deadline to January 1, 2025. It has struggled to find a workable way to enforce its proposed caps.

The stakes are high, especially for PhonePe, India’s most valuable fintech startup, valued at $12 billion.

Sameer Nigam, co-founder and CEO of PhonePe, said last month that the startup cannot go public “if there is uncertainty on regulatory issues.”

“If you buy a share at Rs 100 and value it assuming we have 48-49% market share, there is uncertainty whether it will come down to 30% and when,” Nigam told a fintech conference last month. “We are reaching out to them (the regulator) whether they can find another way to at least address any concerns they have or tell us what the list of concerns is,” he added.

This article was originally published on techcrunch.com

Technology

Bluesky addresses trust and safety issues related to abuse, spam and more

Bluesky butterfly logo and Jay Graber

Social media startup Bluesky, which is building a decentralized alternative to X (formerly Twitter), provided an update Wednesday on how it’s approaching various trust and safety issues on its platform. The company is in various stages of developing and piloting a range of initiatives focused on dealing with bad actors, harassment, spam, fake accounts, video safety and more.

To address malicious users or those who harass others, Bluesky says it’s developing new tools that will be able to detect when multiple new accounts are created and managed by the same person. This could help curb harassment when a bad actor creates several different personas to target their victims.

Another new experiment will help detect “rude” replies and surface them to server moderators. Like Mastodon, Bluesky will support a network where self-hosters and other developers can run their own servers that connect to Bluesky’s server and others on the network. This federation capability is still in early access. But over the long term, server moderators will be able to decide how they want to deal with people who post rude replies. In the meantime, Bluesky will eventually reduce the visibility of those replies in its app. Repeated rude labels on content will also lead to account-level labels and suspensions, it says.

To curb the use of lists to harass others, Bluesky will remove individual users from a list if they block the list’s creator. Similar functionality was recently introduced to Starter Packs, a type of shared list that can help new users find people to follow on the platform (check out the TechCrunch Starter Pack).

Bluesky will also scan for lists with offensive names or descriptions to limit the potential for harassing others by adding them to a public list with a toxic or offensive name or description. Lists that violate Bluesky’s Community Guidelines will be hidden from the app until the list owner makes changes that align with Bluesky’s policies. Users who continue to create offensive lists will also face further action, though the company didn’t provide details, adding that lists remain an area of active discussion and development.

In the coming months, Bluesky also intends to move to handling moderation reports through its app, using notifications rather than relying on email reports.

To combat spam and other fake accounts, Bluesky is launching a pilot that will attempt to automatically detect when an account is fake, scamming or spamming users. Paired with moderation, the goal is to be able to take action on accounts within “seconds of receiving a report,” the company said.

One of the more interesting developments is how Bluesky will comply with local laws while still allowing free speech. It will use geotags that allow it to hide some content from users in a particular area to comply with the law.

“This allows Bluesky’s moderation service to maintain flexibility in creating spaces for free expression while also ensuring legal compliance so that Bluesky can continue to operate as a service in these geographic regions,” the company shared in a blog post. “This feature will be rolled out on a country-by-country basis, and we will endeavor to inform users of the source of legal requests when legally possible.”

To address potential trust and safety issues with the recently added video feature, the team is adding options like the ability to turn off autoplay, making sure videos are labeled, and providing the ability to report videos. The team is still evaluating what else may need to be added, which will be prioritized based on user feedback.

When it comes to abuse, the company says its general framework is “a question of how often something happens versus how harmful it is.” The company focuses on addressing high-impact, high-frequency issues, as well as “tracking edge cases that could result in significant harm to a few users.” The latter, while only affecting a small number of people, causes enough “ongoing harm” that Bluesky will take action to prevent the abuse, it says.

Users can raise concerns via in-app reports, email, and mentions of the @safety.bsky.app account.

This article was originally published on techcrunch.com