Black Girls Code announces coding program for young developers - 360WISE MEDIA

Technology

Black Girls Code announces coding program for young developers

Black Girls Code, Cristina Jones, Kalani Jewel

Code Along Jr. is a continuation of Black Girls Code’s Code Along program and is aimed at young programmers aged 7 to 10.


Black Girls Code recently continued its partnership with GoldieBlox as part of their shared mission to teach and encourage the next generation of developers through Code Along, a free video-based academy. Their latest collaboration, Code Along Jr., launched at a live event in Los Angeles in March and is aimed at young developers aged seven to 10, according to an April 2 press release. The series is hosted by child actress Kalani Jewel.

Black Girls Code CEO Cristina Jones said in the press release that Jewel is a perfect fit to host the organization’s newest venture.

“Together we can change the face of technology,” Jones said. “Kalani, as a lively and energetic 12-year-old, is the perfect host for Code Along Jr. It shows girls that technology is cool, not scary. It reaches them at their level and shows them that they can do anything they want. This is incredibly important because technology is at the heart of everything we do and Black girls absolutely have so much to offer as entrepreneurs, executives, creators and artists of the future.”

GoldieBlox founder and CEO Debra Sterling said in the press release that the partnership with Black Girls Code brings the same kind of synergy as the collaboration between Jewel and Black Girls Code.

“GoldieBlox lives at the intersection of technology, fun and female empowerment,” Sterling said. “Collaborating with Black Girls Code on Code Along Jr. is a natural solution for us. We are excited to help girls experience the magic of technology-enabled creativity and support them on their educational journey.”

GoldieBlox is a toy company focused on interactive toys for girls. According to its website, its mission “is to reach girls at a young age and introduce them (and often their parents) to science, technology, engineering and math.”

Black Girls Code is filling a critical need in the technology industry. As reported, only 2% of STEM employees are Black women, and only two Fortune 500 corporations have Black women as CEOs.

Jones told Fortune that the group’s mission is to prioritize inclusion and diversity by empowering Black girls to feel a sense of belonging in the tech industry, despite the dismal number of Black women in the space.

“We need girls in the room; we need more integration in the room,” Jones said. “But we do that by creating a space where people themselves can understand what their own value proposition is — that they are in this room because they are smart and capable and creative.”

Jones continued, “I truly cannot express how inspiring it was to stand there and see little Black girls come in saying they were curious about coding and leave saying they wanted to come back every day for the rest of their lives.”

Jones also described her ambitions: “I want to create tables for these girls, so they can create tables, so we can affect change on a large scale. Because momentum is important – once we hopefully get going, it will be like a snowball effect.”

We encourage young programmers to subscribe to the Black Girls Code YouTube channel and follow the organization on social media for updates on all Code Along projects.


This article was originally published on : www.blackenterprise.com

Technology

Change Healthcare hackers breached using stolen credentials – no MFA, says UHG CEO


The ransomware gang that breached U.S. health tech giant Change Healthcare used a set of stolen credentials to remotely access company systems that were not protected by multi-factor authentication, according to the CEO of parent company UnitedHealth.

UnitedHealth CEO Andrew Witty gave written testimony ahead of Wednesday’s House subcommittee hearing on the February ransomware attack that caused months of disruption to the U.S. health care system.

For the first time, the health insurance giant has given an account of how hackers breached Change Healthcare’s systems and extracted huge amounts of health data from them. Last week, UnitedHealth said hackers had stolen health data for “a significant portion of people in America.”

Change Healthcare processes health insurance claims and billing for about half of all U.S. residents.

According to Witty’s testimony, the hackers “used the compromised credentials to gain remote access to the Change Healthcare Citrix portal.” Organizations like Change use Citrix software to enable employees to remotely access work computers on internal networks.

Witty did not explain in detail how the credentials were stolen. The Wall Street Journal was the first to report the hackers’ use of compromised credentials last week.

Witty said, however, that the portal “lacks multi-factor authentication” – a basic security feature that prevents the misuse of stolen passwords by requiring a second code, sent to an employee’s trusted device such as a phone. It’s unclear why Change didn’t set up multi-factor authentication on this system, but the question is likely to be of interest to investigators trying to understand potential deficiencies in the insurer’s systems.
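The “second code” Witty describes is commonly a time-based one-time password (TOTP, per RFC 6238). As a rough illustration of the mechanism only – a minimal sketch using Python’s standard library and the RFC’s published test secret, not a claim about how Change Healthcare’s portal was or should have been implemented:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if timestamp is None else timestamp) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify(secret_b32, submitted, timestamp=None):
    """Accept a code only for the current 30-second window (no drift allowance)."""
    return hmac.compare_digest(totp(secret_b32, timestamp), submitted)

# RFC 6238 test secret: the ASCII bytes "12345678901234567890", base32-encoded.
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, timestamp=59))              # "287082" (known RFC 6238 vector at T=59)
print(verify(secret, "287082", timestamp=59))  # True
```

Even this toy check would have blocked a plain stolen-password login: the attacker would also need the per-employee secret or device generating the rotating code.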

“Once the attacker gained access, they moved around systems and extracted data in a more sophisticated way,” Witty said.

Witty said the hackers deployed the ransomware nine days later, on Feb. 21, prompting the health care giant to shut down its network to contain the breach.

Last week, UnitedHealth confirmed that the company had paid a ransom to hackers who claimed responsibility for the cyberattack and the subsequent theft of terabytes of data. The hackers, known as RansomHub, are the second gang to claim the data theft; they posted some of the stolen data on the dark web and demanded a ransom not to sell it.

Earlier this month, UnitedHealth said the ransomware attack cost it more than $870 million in the first quarter, in which the company had revenue of nearly $100 billion.

This article was originally published on : techcrunch.com

Technology

Google Gemini: everything you need to know about the new generative artificial intelligence platform


Google is trying to impress with Gemini, its flagship suite of generative AI models, applications and services.

So what is Gemini? How can you use it? And how does it compare to the competition?

To help you keep up with the latest Gemini developments, we’ve put together this handy guide, which we’ll keep updating as new Gemini models, features, and news about Google’s plans for Gemini become available.

What is Gemini?

Gemini is Google’s long-promised family of next-generation GenAI models, developed by Google’s AI labs DeepMind and Google Research. It comes in three flavors:

  • Gemini Ultra, the most capable Gemini model.
  • Gemini Pro, a “lite” Gemini model.
  • Gemini Nano, a smaller “distilled” model that runs on mobile devices like the Pixel 8 Pro.

All Gemini models were trained to be “natively multimodal” – in other words, able to work with and use more than just words. They were pre-trained and fine-tuned on a variety of audio, images and videos, a large set of codebases, and text in various languages.

This sets Gemini apart from models such as Google’s LaMDA, which was trained exclusively on text data. LaMDA can’t understand or generate anything beyond text (e.g. essays, email drafts), but that isn’t the case with Gemini models.

What is the difference between Gemini Apps and Gemini Models?

Image credits: Google

Google, proving once again that it has no talent for branding, didn’t make it clear from the start that Gemini is separate and distinct from the Gemini web and mobile app (formerly Bard). The Gemini apps are simply an interface through which you can access certain Gemini models – think of them as Google’s GenAI client.

Incidentally, the Gemini apps and models are also completely independent of Imagen 2, Google’s text-to-image model available in some of the company’s development tools and environments.

What can Gemini do?

Because Gemini models are multimodal, they can in theory perform a range of multimodal tasks, from transcribing speech to captioning images and videos to generating graphics. Some of these capabilities have already reached the product stage (more on that later), and Google promises that all of them – and more – will be available in the near future.

Of course, it is a bit difficult to take the company’s word for it.

Google seriously fell short of expectations with the original Bard launch. More recently, it caused a stir with a video purporting to show Gemini’s capabilities that turned out to be heavily doctored and more or less aspirational.

Still, assuming Google is being more or less honest in its claims, here’s what the various tiers of Gemini will be able to do once they reach their full potential:

Gemini Ultra

Google says that Gemini Ultra – thanks to its multimodality – can help with physics homework, solving problems step by step on a worksheet and pointing out possible errors in already completed answers.

Gemini Ultra can also be used for tasks such as identifying scientific papers relevant to a given problem, Google says, extracting information from those papers and “updating” a chart from one of them by generating the formulas needed to recreate the chart with newer data.

As mentioned earlier, Gemini Ultra technically supports image generation. However, this capability hasn’t yet made it into the shipped model – perhaps because the mechanism is more complex than the way apps such as ChatGPT generate images. Rather than passing prompts to an image generator (such as DALL-E 3, in ChatGPT’s case), Gemini produces images “natively”, without an intermediate step.

Gemini Ultra is available as an API through Vertex AI, Google’s fully managed AI developer platform, and AI Studio, Google’s web-based tool for app and platform developers. It also powers the Gemini apps – but not for free. Access to Gemini Ultra through what Google calls Gemini Advanced requires a subscription to the Google One AI Premium plan, priced at $20 per month.

The AI Premium plan also connects Gemini to your broader Google Workspace account – think emails in Gmail, documents in Docs, presentations in Slides, and Google Meet recordings. This is useful, for example, when Gemini is summarizing emails or taking notes during a video call.

Gemini Pro

Google claims that Gemini Pro improves on LaMDA in its reasoning, planning and understanding capabilities.

An independent study by Carnegie Mellon and BerriAI researchers found that the initial version of Gemini Pro was indeed better than OpenAI’s GPT-3.5 at handling longer and more complex reasoning chains. But the study also found that, like all large language models, this version of Gemini Pro particularly struggled with math problems involving several digits, and users found examples of faulty reasoning and obvious errors.

Google promised remedies, though – and the first arrived in the form of Gemini 1.5 Pro.

Designed as a drop-in replacement, Gemini 1.5 Pro improves on its predecessor in many areas, perhaps most significantly in the amount of data it can process. Gemini 1.5 Pro can take in ~700,000 words or ~30,000 lines of code – 35 times more than Gemini 1.0 Pro. And since the model is multimodal, it isn’t limited to text: Gemini 1.5 Pro can analyze up to 11 hours of audio or an hour of video in various languages, albeit slowly (e.g., searching for a scene in an hour-long video takes 30 seconds to a minute).

Gemini 1.5 Pro entered public preview on Vertex AI in April.

A separate endpoint, Gemini Pro Vision, can process text and imagery – including photos and videos – and output text, along the lines of OpenAI’s GPT-4 with Vision model.

Using Gemini Pro in Vertex AI. Image credits: Google

Within Vertex AI, developers can tailor Gemini Pro to specific contexts and use cases through a tuning or “grounding” process. Gemini Pro can be connected to external third-party APIs to perform specific actions.

AI Studio offers workflows for creating structured chat prompts with Gemini Pro. Developers have access to both the Gemini Pro and Gemini Pro Vision endpoints, and can adjust the model temperature to control the output’s creative range, provide examples of tone and style, and fine-tune the safety settings.
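The temperature setting mentioned above isn’t Gemini-specific: in most LLMs, the model’s logits are divided by the temperature before a softmax, so low values concentrate probability on the likeliest token and high values flatten the distribution. A minimal sketch of the underlying math (the logits here are illustrative, not Gemini internals):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax. Lower T -> peakier distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                        # hypothetical per-token scores
cold = softmax_with_temperature(logits, 0.5)    # nearly deterministic sampling
hot = softmax_with_temperature(logits, 2.0)     # closer to uniform sampling
print(max(cold) > max(hot))  # True: low temperature concentrates probability
```

At temperature 0.5 the top token dominates the distribution; at 2.0 the three options become much closer in probability, which is why higher temperatures read as “more creative” and lower ones as more predictable.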

Gemini Nano

Gemini Nano is a much smaller version of the Gemini Pro and Ultra models, efficient enough to run directly on (some) phones rather than sending the task to a server somewhere. So far it powers a couple of features on the Pixel 8 Pro, Pixel 8 and Samsung Galaxy S24, including Summarize in Recorder and Smart Reply in Gboard.

The Recorder app, which lets users record and transcribe audio at the touch of a button, includes a Gemini-powered summary of recorded conversations, interviews, presentations and more. Users get these summaries even without a signal or Wi-Fi connection – and, in a nod to privacy, no data leaves the phone.

Gemini Nano is also in Gboard, Google’s keyboard app, where it powers a feature called Smart Reply that suggests the next thing you might want to say while chatting in a messaging app. The feature initially works only with WhatsApp but will come to more apps over time, Google says.

And in the Google Messages app on supported devices, Nano enables Magic Compose, which can craft messages in styles such as “excited”, “formal” and “lyrical”.

Is Gemini better than OpenAI’s GPT-4?

Google has repeatedly touted Gemini’s superiority on benchmarks, claiming that Gemini Ultra beats current state-of-the-art results on “30 of 32 commonly used academic benchmarks used in the research and development of large language models.” The company says, meanwhile, that Gemini 1.5 Pro is better than Gemini Ultra in some situations at tasks such as summarizing content, brainstorming and writing; presumably that will change with the debut of the next Ultra model.

But leaving aside the question of whether benchmarks really indicate a better model, the scores Google points to appear to be only marginally better than those of OpenAI’s corresponding models. And – as mentioned earlier – some early impressions weren’t great, with users and academics pointing out that the older version of Gemini Pro tends to get basic facts wrong, struggles with translations and gives poor coding suggestions.

How much does Gemini cost?

Gemini 1.5 Pro is free to use in Gemini apps and, for now, in AI Studio and Vertex AI.

However, once Gemini 1.5 Pro exits preview in Vertex, input will cost $0.0025 per character and output $0.00005 per character. Vertex customers are billed per 1,000 characters (roughly 140 to 250 words) and, for models like Gemini Pro Vision, per image ($0.0025).

Let’s assume a 500-word article contains 2,000 characters. Summarizing that article with Gemini 1.5 Pro would cost $5, while generating an article of similar length would cost $0.10.
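At those rates the arithmetic is simple: input characters times the input price, output characters times the output price. A quick sanity check of the figures above (a sketch of the billing math only, not Google’s billing code):

```python
# Per-character prices quoted for Gemini 1.5 Pro once it exits preview.
INPUT_PRICE_PER_CHAR = 0.0025    # dollars per input character
OUTPUT_PRICE_PER_CHAR = 0.00005  # dollars per output character

def summarize_cost(input_chars):
    """Cost of feeding input_chars of text into the model."""
    return input_chars * INPUT_PRICE_PER_CHAR

def generation_cost(output_chars):
    """Cost of having the model produce output_chars of text."""
    return output_chars * OUTPUT_PRICE_PER_CHAR

article_chars = 2_000  # the ~500-word example above
print(summarize_cost(article_chars))            # 5.0 dollars to summarize
print(round(generation_cost(article_chars), 2)) # 0.1 dollars to generate
```

The asymmetry is the point: at these rates, reading text into the model costs 50 times more per character than having it write text out.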

Pricing for the Ultra has not yet been announced.

Where can you try Gemini?

Gemini Pro

The easiest place to use Gemini Pro is in the Gemini apps. Pro and Ultra respond to queries in multiple languages.

Gemini Pro and Ultra are also available in preview in Vertex AI via an API. The API is free to use “within limits” for now and supports certain regions, including Europe, as well as features such as chat and filtering.

Elsewhere, Gemini Pro and Ultra can be found in AI Studio. Using the service, developers can iterate on Gemini-based prompts and chatbots and then get API keys to use them in their apps, or export the code to a more fully featured IDE.

Code Assist (formerly Duet AI for Developers), Google’s suite of AI-powered code completion and generation tools, also uses Gemini models. Developers can make “large-scale” changes across codebases, such as updating file dependencies and reviewing large chunks of code.

Google has brought Gemini models to its dev tools for Chrome and the Firebase mobile development platform, and to its database creation and management tools. It has also launched new Gemini-powered security products, e.g. Gemini in Threat Intelligence, a component of Google’s Mandiant cybersecurity platform that can analyze large portions of potentially malicious code and let users search in natural language for ongoing threats or indicators of compromise.

This article was originally published on : techcrunch.com

Technology

NIST Launches New Generative Artificial Intelligence Assessment Platform


The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests technology for the U.S. government, businesses and the general public, announced on Monday the launch of NIST GenAI, a new program led by NIST to evaluate generative AI technologies, including text- and image-generating AI.

NIST GenAI will release benchmarks, help build “content authenticity” detection systems (i.e., deepfake-checking) and encourage the development of software that spots the source of fake or misleading AI-generated information, NIST explains on the newly launched NIST GenAI website and in a press release.

“The NIST GenAI program will publish a series of challenges designed to assess and measure the capabilities and limitations of generative artificial intelligence technologies,” the press release reads. “These assessments will be used to identify strategies to promote information integrity and guidance for the safe and responsible use of digital content.”

NIST GenAI’s first project is a pilot study to build systems that can reliably tell human-created media from AI-generated media, starting with text. (While many services claim to detect deepfakes, studies and our own testing have shown them to be unreliable, particularly when it comes to text.) NIST GenAI is inviting teams from academia, industry and research labs to submit either “generators” – AI systems that generate content – or “discriminators” – systems that try to identify AI-generated content.

Generators in the study must produce summaries given a topic and a set of documents, while discriminators must detect whether a given summary was written by AI. To ensure fairness, NIST GenAI will provide the data needed to train generators and discriminators; systems trained on publicly available data won’t be accepted, including but not limited to open models such as Meta’s Llama 3.
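In other words, a discriminator is just a function that scores how likely a given summary is to be machine-written. As a purely illustrative toy – a naive vocabulary-repetition heuristic, not a method NIST or any participant would actually use:

```python
# Toy "discriminator" sketch: flags a summary as AI-generated when its
# vocabulary is unusually repetitive (low type/token ratio). This is a
# deliberately naive stand-in heuristic; real discriminators are trained models.
def type_token_ratio(text):
    """Fraction of distinct words among all words (0.0 for empty text)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def discriminate(summary, threshold=0.5):
    """Return True if the summary looks machine-generated under the heuristic."""
    return type_token_ratio(summary) < threshold

repetitive = "the cat sat on the mat the cat sat on the mat"
varied = "researchers evaluated several detection systems during the pilot study"
print(discriminate(repetitive))  # True  (5 distinct words / 12 tokens ~ 0.42)
print(discriminate(varied))      # False (9 distinct words / 9 tokens = 1.0)
```

The pilot’s point is precisely that heuristics this shallow fail in practice, which is why NIST wants controlled training data and head-to-head evaluation of generator and discriminator submissions.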

Registration for the pilot begins May 1, with results scheduled to be announced in February 2025.

The launch of NIST GenAI – and a study focused on deepfakes – comes amid exponential growth in the volume of deepfakes.

According to data from Clarity, a deepfake detection company, 900% more deepfakes have been created this year compared with the same period last year. That’s understandably causing alarm: a recent YouGov poll found that 85% of Americans said they were concerned about the spread of misleading deepfakes online.

The launch of NIST GenAI is part of NIST’s response to President Joe Biden’s executive order on artificial intelligence, which laid out rules requiring AI companies to be more transparent about how their models work and established a raft of new standards, including for labeling AI-generated content.

It is also NIST’s first AI-related announcement since the appointment of Paul Christiano, a former OpenAI researcher, to the agency’s AI Safety Institute.

Christiano was a controversial choice because of his “doomer” views; he once predicted “a 50% chance that the development of artificial intelligence ends in [the destruction of humanity].” Critics – reportedly including scientists within NIST – fear that Christiano may push the AI Safety Institute to focus on “fantasy scenarios” rather than realistic, more immediate risks from AI.

NIST says NIST GenAI will inform the work of the AI Safety Institute.

This article was originally published on : techcrunch.com