
Technology

Women in AI: Anna Korhonen explores the intersection of linguistics and artificial intelligence


To give women academics in AI and others their well-deserved – and overdue – time in the spotlight, TechCrunch is launching a series of interviews focusing on the remarkable women who have contributed to the AI revolution. As the AI boom continues, we will publish several pieces throughout the year, highlighting key work that often goes unrecognized. Read more profiles here.

Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She is also a senior research fellow at Churchill College, a member of the Association for Computational Linguistics, and a fellow of the European Laboratory for Learning and Intelligent Systems.

Korhonen was previously a fellow at the Alan Turing Institute, and she holds a PhD in computer science as well as master’s degrees in computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a special interest in responsible and “human-centered” NLP that, as she says, “draws on an understanding of human cognitive, social and creative intelligence.”


Questions & Answers

Briefly, how did you get your start in AI? What drew you to the field?

I have always been fascinated by the beauty and complexity of human intelligence, particularly as it relates to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it is a field that allows me to combine all of these interests.

What work in AI are you most proud of?

While learning how to build intelligent machines is fascinating, and it can be easy to get lost in the world of language modeling, the ultimate reason we build AI is its practical potential. I am most proud of the work where my fundamental research in natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or apps that can support education.

Advertisement

Much of my current research is driven by my mission to develop AI that can improve people’s lives. AI has enormous positive potential for social and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing that potential.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?

I am fortunate to be working in an area of AI where we do have a sizable female population and well-established support networks. I have found these immensely helpful in navigating professional and personal challenges.

For me, the biggest problem is the way this male-dominated industry sets the agenda around AI. A prime example is the current arms race to develop ever-larger AI models at all costs. This has a profound impact on the priorities of both academia and industry, as well as wide-ranging socio-economic and environmental implications. Do we need bigger models, and what are their global costs and benefits? I believe we would have asked these questions much earlier if we had a better gender balance in the field.


What advice would you give to women seeking to enter the AI industry?

AI desperately needs more women at all levels, but especially in leadership. The current leadership culture is not necessarily attractive to women, but active engagement can change that culture – and, ultimately, the culture of AI. Women are not always great at supporting one another, and I would love to see a change of attitude in this regard: if we want to achieve a better gender balance in this field, we need to actively network and help one another.

What are the most pressing issues facing AI as it evolves?

AI has developed incredibly quickly: in less than a decade, it has evolved from an academic field into a global phenomenon. During this time, most of the effort has gone into scaling up massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it best serves humanity. People have good reason to worry about the safety and trustworthiness of AI and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the heart of AI development.


What are some issues AI users should be aware of?

Current AI, even when it appears highly fluent, ultimately lacks humans’ knowledge of the world and the ability to understand the complex social contexts and norms we operate in. Even the best technology today makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I would not trust it to educate my children or make important decisions for me. We humans should remain in charge.

What is the best way to responsibly build AI?

AI developers tend to think about ethics as an afterthought, only after the technology has already been developed. The best way to think about it is before any development begins. Questions such as “Do I have a diverse enough team to develop a fair system?,” “Is my data truly free to use and representative of all user populations?” and “Are my techniques robust?” should really be asked at the outset.


While we can address part of this problem through education, we can only enforce it through regulation. The recent emergence of national and global AI regulations is important and must continue in order to ensure that future technologies will be safer and more trustworthy.

How can investors better push for responsible AI?

AI regulations are emerging, and companies will ultimately have to comply with them. We can think of responsible AI as sustainable AI that is truly worth investing in.


This article was originally published on: techcrunch.com

Technology

Anysphere, maker of Cursor, reportedly raising $900 million at a $9 billion valuation


AI robot face and programming code on a black background.

Anysphere, maker of the AI-powered coding assistant Cursor, has raised $900 million in a new funding round led by Thrive Capital, the Financial Times reported, citing anonymous sources familiar with the deal.

The report said Andreessen Horowitz (a16z) and Accel are also participating in the round, which values the company at about $9 billion.

Cursor previously raised $105 million from Thrive and a16z at a $2.5 billion valuation, as TechCrunch reported in December. Thrive also led that round, with a16z participating as well. According to Crunchbase data, the startup has raised over $173 million to date.


Investors including Index Ventures and Benchmark have reportedly tried to back the company, but it appears existing investors do not want to miss the opportunity to back it themselves.

Other AI-powered coding startups are also attracting investor interest. TechCrunch reported in February that Windsurf, an Anysphere rival, was in talks to raise funding at a $3 billion valuation. OpenAI, an investor in Anysphere, is reportedly trying to acquire Windsurf for about the same amount.


This article was originally published on: techcrunch.com


Technology

Temu stops shipping products from China to the US


Shein and Temu icons are seen displayed on a phone screen in this illustration photo

Chinese retailer Temu has changed its strategy in the face of US tariffs.

Through an executive order, President Donald Trump ended the so-called de minimis rule, which allowed goods worth $800 or less to enter the country without tariffs. He also raised tariffs on Chinese goods by over 100%, forcing Chinese companies such as Temu and Shein, as well as American giants such as Amazon, to adjust their plans and raise prices.

CNBC reports that Temu was affected as well, with American shoppers seeing “import charges” of 130% to 150% added to their bills. Now, however, the company is no longer shipping goods directly from China to the United States. Instead, it only displays listings for products available in US warehouses, while goods shipped from China are marked as out of stock.


“Temu has been actively recruiting US sellers to join the platform,” a Temu spokesperson said. “The move is designed to help local sellers reach more customers and grow their businesses.”


This article was originally published on: techcrunch.com

Technology

One of Google’s latest AI models scores worse on safety


The Google Gemini generative AI logo on a smartphone.

A recently released Google AI model performs worse on certain safety tests than its predecessor, according to the company’s internal benchmarking.

In a technical report published this week, Google reveals that its Gemini 2.5 Flash model is more likely to generate text that violates its safety guidelines than Gemini 2.0 Flash. On two metrics, “text-to-text safety” and “image-to-text safety,” Gemini 2.5 Flash regresses by 4.1% and 9.6%, respectively.

Text-to-text safety measures how frequently a model violates Google’s guidelines given a prompt, while image-to-text safety evaluates how closely the model adheres to those boundaries when prompted with an image. Both tests are automated, not human-supervised.
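For intuition, here is a minimal sketch of how an automated safety benchmark of this kind can be structured. Everything below is a hypothetical illustration: the function names, the model-calling hook and the automated judge are assumptions for the example, not Google’s unpublished evaluation harness.

```python
# Minimal sketch of a "text-to-text safety" style automated benchmark.
# Hypothetical interfaces: `generate` calls the model under test, and
# `violates_policy` is an automated judge (e.g. a safety classifier).
# Neither reflects Google's actual, unpublished harness.

from typing import Callable, List


def violation_rate(
    prompts: List[str],
    generate: Callable[[str], str],
    violates_policy: Callable[[str, str], bool],
) -> float:
    """Fraction of prompts whose responses the judge flags as guideline violations."""
    flagged = sum(1 for p in prompts if violates_policy(p, generate(p)))
    return flagged / len(prompts)


def regression_points(rate_new: float, rate_old: float) -> float:
    """Percentage-point change between model versions; a positive value means
    the newer model violates policy more often (cf. the reported 4.1% and 9.6%)."""
    return 100 * (rate_new - rate_old)
```

An image-to-text variant would run the same loop with image inputs in place of text prompts, scored against the same policy judge.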


In an emailed statement, a Google spokesperson confirmed that Gemini 2.5 Flash “performs worse on text-to-text and image-to-text safety.”

These surprising benchmark results come as AI companies move to make their models more permissive – in other words, less likely to refuse to respond to controversial or sensitive subjects. For its latest Llama models, Meta said it tuned the models not to endorse “some views over others” and to reply to more “debated” political prompts. OpenAI said earlier this year that it would tweak future models to avoid taking an editorial stance and to offer multiple perspectives on controversial topics.

Sometimes those permissiveness efforts have backfired. TechCrunch reported on Monday that the default model powering OpenAI’s ChatGPT allowed minors to generate erotic conversations. OpenAI blamed the behavior on a “bug.”

According to Google’s technical report, Gemini 2.5 Flash, which is still in preview, follows instructions more faithfully than Gemini 2.0 Flash, including instructions that cross problematic lines. The company claims the regressions can be attributed partly to false positives, but it also admits that Gemini 2.5 Flash sometimes generates “violative content” when explicitly asked.


“Naturally, there is tension between [instruction following] on sensitive topics and safety policy violations, which is reflected across our evaluations,” the report reads.

Scores on SpeechMap, a benchmark that probes how models respond to sensitive and controversial prompts, also suggest that Gemini 2.5 Flash is far less likely to refuse to answer contentious questions than Gemini 2.0 Flash. TechCrunch’s testing of the model via the AI platform OpenRouter found that it will uncomplainingly write essays in support of replacing human judges with AI, weakening due process protections in the US, and implementing widespread warrantless government surveillance programs.

Thomas Woodside, co-founder of the Secure AI Project, said the limited details Google gave in its technical report demonstrate the need for more transparency in model testing.

“There’s a trade-off between instruction following and policy compliance, because some users may ask for content that would violate policies,” Woodside told TechCrunch. “In this case, Google’s latest Flash model complies with instructions more while also violating policies more. Google doesn’t provide much detail on the specific cases in which policies were violated, although it says they are not severe. Without knowing more, it’s hard for independent analysts to know whether there’s a problem.”


Google has come under fire before for its model safety reporting practices.

It took the company weeks to publish a technical report for its most capable model, Gemini 2.5 Pro. When the report was eventually published, it initially omitted key details of the safety tests.

On Monday, Google released a more detailed report with additional safety information.


This article was originally published on: techcrunch.com