Technology
Bitcoin and NFTs could get more legal protection as ‘personal property’ under proposed UK law
The UK government has introduced a new bill in parliament that proposes legal protections for digital assets such as cryptocurrencies, non-fungible tokens (NFTs) and carbon credits.
The bill comes as the cryptocurrency sector faces a series of regulatory headwinds: in the U.S., the Securities and Exchange Commission (SEC) has ruled that some crypto assets are securities, and earlier this year it approved the first U.S.-listed exchange-traded funds (ETFs) to track Bitcoin. Meanwhile, the European Union (EU) is also rolling out new rules to regulate cryptocurrencies and make it easier to trace transactions.
The UK is working on similar regulations, but the new Property (Digital Assets etc) Bill is narrower in scope: it is about recognizing digital assets as “personal property,” meaning they would have the same legal status as traditional assets.
The proposed law is a response to a 2023 report from the Law Commission, which outlined the need to update existing legislation on personal property rights. The report noted:
As technology advances and people spend more time online, our relationship with digital assets will become even more important… Our recommendations also aim to ensure that the private law of England and Wales remains a dynamic, globally competitive and flexible tool for market participants in the world of digital assets.
Law Commission: Digital Assets – Final Report Summary
The concept of “personal property” is significant in law because it plays a central role in legal matters relating to bankruptcy, insolvency, theft, inheritance, divorce proceedings and more. Currently, the law in England and Wales (Scotland and Northern Ireland have separate legal systems) recognizes two categories of property: tangible goods such as cars, jewellery and money, known as “things in possession”, and “things in action”, which covers intangible assets such as shares, debts and intellectual property.
That leaves a significant gap for digital assets like Bitcoin and similar cryptocurrencies, as well as NFTs such as digital art (which have changed hands for significant sums in recent years). The proposed third category, if passed into law, would bring greater clarity to what constitutes personal property and make it easier for courts to resolve disputes.
For example, a court could issue a freezing order to stop someone from dissipating digital assets before a dispute is resolved, much as it would for tangible goods. And if someone’s digital assets were stolen as part of a fraud, the victim could pursue stronger legal remedies.
Additionally, such a law would mean that digital assets could become part of an individual’s estate for the purposes of probate or bankruptcy proceedings.
What’s next?
The bill was first published in draft form in July, but has now reached the first reading stage in the House of Lords, where it will have to go through a series of debates and amendments before moving to the House of Commons.
There is still a long way to go before the law comes into force, but with the Labour Party currently holding a majority government, there is a high probability that the bill will ultimately pass – though it is not yet clear in what form and with what provisions.
For example, what will be considered a “digital asset” under the new legislation? In theory, the term covers a wide range of things, such as email accounts and files, carbon credits and in-game digital assets. The Law Commission acknowledges this, noting that there will likely be “borderline issues” across the digital asset spectrum. It also recommends a common law approach, meaning that questions would be settled through the courts, with the presiding judge ruling in each individual case and setting precedents as to whether personal property rights should be granted to a given asset.
However, the Ministry of Justice and the Law Commission have been clear that the “main” digital assets they intend to protect are crypto-tokens such as cryptocurrencies and NFTs.
Technology
Police in California don’t love their Tesla police cars
On Thursday evening, Elon Musk showed off Tesla’s latest technology, saying of the sleek Cybercab robotaxis and a prototype for a new electric van that “the future should look like the future.”
Today, however, some California police departments are beginning to regret the decision to replace their fleets with Tesla Model Ys. Although the cars are climate-friendly – preparing departments for an emission-free future – it turns out Teslas pose plenty of other challenges, according to SFGate interviews with three Northern California police chiefs.
For example, once the cars are modified for police use, the rear seat is too small for more than one passenger and the front seat is too tight for officers. The chiefs also cite Autopilot interference when trying to leave the roadway; argue that relying on unsecured charging stations puts officers at risk when transporting suspects over long distances; and note that during a shooting, police are taught to take cover behind a car’s engine block – something that isn’t possible with electric vehicles.
Technology
Anthropic CEO Goes Full Techno-Optimist in 15,000-Word Paean to Artificial Intelligence
Anthropic CEO Dario Amodei wants you to know that he is not an AI “doomer.”
At least, that’s my read of the mic-drop of an essay – roughly 15,000 words – that Amodei posted on his blog late Friday night. (I tried asking Anthropic’s chatbot Claude about it, but unfortunately the post exceeded the free plan’s length limit.)
Amodei paints a picture of a world in which all of AI’s risks are mitigated and the technology delivers previously unrealized prosperity, social improvement, and abundance. He says this isn’t intended to minimize AI’s shortcomings – at the outset, Amodei takes aim, without naming names, at AI companies that oversell and generally hype their technology’s capabilities. But it could be argued that the essay leans too far toward techno-utopia, making claims that simply aren’t supported by the facts.
Amodei believes that “powerful AI” could emerge as early as 2026. By “powerful AI,” he means AI that is “smarter than a Nobel Prize winner” in fields such as biology and engineering, and that can perform tasks like proving unsolved mathematical theorems and writing “extremely good novels.” Amodei claims this AI will be able to control any software and hardware imaginable, including industrial machines, and will essentially do most of the work humans do today – only better.
“(This AI) can engage in any actions, communications, or remote operations… including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on,” Amodei writes. “It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.”
A lot would have to happen to get to that point.
Even today’s best AI can’t “think” in the way we understand it. Models don’t reason so much as replicate patterns they observe in their training data.
Assuming, for the sake of Amodei’s argument, that the AI industry does soon “solve” human-like thinking, would robotics catch up to allow future AIs to conduct laboratory experiments, manufacture their own tools, and so on? The fragility of today’s robots suggests it’s unlikely.
But Amodei is an optimist – a very big one.
He believes that within the next 7 to 12 years, AI could help treat nearly all infectious diseases, eliminate most cancers, cure genetic disorders and halt Alzheimer’s disease in its early stages. Amodei also believes that within the next 5 to 10 years, conditions such as post-traumatic stress disorder (PTSD), depression, schizophrenia and addiction will be curable with AI-based drugs or genetically preventable through embryo screening (a controversial opinion) – and that there will also be AI-developed drugs that “adjust cognitive function and emotional state” to get our brains to “behave a little better and provide more satisfying daily experiences.”
If this comes to pass, Amodei expects the average human lifespan to double to 150 years.
“My basic prediction is that AI-based biology and medicine will allow us to compress the progress that biologists would make over the next 50 to 100 years into 5 to 10 years,” he writes. “I’ll call it the ‘compressed 21st century’: the idea that if we develop powerful artificial intelligence, we will make as much progress in biology and medicine in a few years as we would have made in the entire 21st century.”
This, too, seems far-fetched, given that AI has not yet radically transformed medicine – and may not for a long time, if ever. Even if AI does reduce the labor and cost of getting a drug to preclinical testing, the drug could still fail at a later stage, just as human-designed drugs do. It’s also worth considering that the AI used in healthcare today has proven to be biased and risky in a number of respects, or otherwise enormously difficult to implement in existing clinical and laboratory settings. Suggesting that all of these and other problems will be solved within the next decade or so seems, well, aspirational.
But Amodei doesn’t end there.
He claims that AI could solve world hunger, turn the tide of climate change, and transform the economies of most developing countries; Amodei believes AI could raise sub-Saharan Africa’s GDP per capita ($1,701 in 2022) to China’s GDP per capita ($12,720 in 2022) within 5 to 10 years.
These are bold claims, though probably familiar to anyone who has listened to adherents of the “Singularity” movement, which expects similar outcomes. Amodei acknowledges that such development would require “a massive effort in terms of global health, philanthropy (and) political support,” which he believes will happen because it is in the world’s best economic interest.
That would be a dramatic shift in human behavior, given that humans have repeatedly shown that their primary interest is in whatever benefits them in the short term. (Deforestation is just one example among thousands.) It’s also worth noting that many of the workers responsible for labeling the datasets used to train AI are paid well below minimum wage, while their employers reap tens of millions – or hundreds of millions – of dollars in equity from the results.
Amodei only briefly touches on the danger AI poses to civil society, proposing that a coalition of democracies secure the AI supply chain and block adversaries who intend to use AI for malicious ends from accessing powerful means of AI production (semiconductors, etc.). In the same breath, he suggests that AI, in the right hands, could be used to “challenge repressive governments” and even reduce bias in the legal system. (Historically, AI has amplified biases in the legal system.)
“A truly mature and successful implementation of AI can reduce bias and be fairer for all,” writes Amodei.
So if AI takes over every conceivable task and does it better and faster, wouldn’t that put humans in a tough spot economically? Amodei admits that it would, and that at that point society would need to have conversations about “how the economy should be organized.”
But he offers no solution.
“People really want a sense of accomplishment and even a sense of competition, and in a post-AI world it will be entirely possible to spend years attempting a very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies,” he writes. “The fact that (a) AI could in principle do the task better and (b) the task is no longer an economically rewarded element of the global economy does not seem to me to matter much.”
In closing, Amodei puts forward the idea that AI is simply a technological accelerator – that humans naturally move toward “the rule of law, democracy and Enlightenment values.” But in doing so, he ignores many of AI’s costs. AI is expected to have – and is already having – a huge effect on the environment. And it is fueling inequality. Nobel Prize-winning economist Joseph Stiglitz and others have warned that AI-driven labor disruptions could further concentrate wealth in the hands of companies and leave workers more powerless than ever.
Those companies include Anthropic, though Amodei is reluctant to admit it. After all, Anthropic is a business – one apparently worth close to $40 billion. And those who profit from its AI technology are, by and large, corporations whose only responsibility is to increase value for shareholders, not to better humanity.
A cynic might well question the timing of the essay, given that Anthropic is reported to be in the process of raising billions of dollars in venture capital funding. OpenAI CEO Sam Altman published a similar techno-optimist manifesto shortly before OpenAI closed its $6.5 billion funding round. Perhaps it’s a coincidence.
Then again, Amodei is not a philanthropist. Like any CEO, he has a product to pitch. It just so happens that his product will “save the world” – and those who think otherwise risk being left behind. At least, that’s what he would have you believe.
Technology
Khosla Ventures just backed OpenAI with $405 million more, but not necessarily with its own capital
Khosla Ventures has put another $405 million into OpenAI, according to filings.
Based on the filing itself, Khosla’s stake accounts for at least 6% of the $6.6 billion round the ChatGPT maker closed last week. That doesn’t necessarily mean Khosla put significant capital – or any of its own – into this round. Most, and perhaps all, of the $405 million was raised from other investors through a special purpose vehicle, or SPV.
Special purpose vehicles are used when a firm doesn’t have enough capital of its own to fill a round allocation, or when it already has sufficient exposure to the company and offers its allocation to other investors clamoring for shares.
Khosla Ventures declined to comment, so we don’t know the terms of its participation in OpenAI’s latest round, which valued the company at $157 billion. Either way, OpenAI has been good for Khosla Ventures, which invested $50 million in the company in 2019 and, according to The Information, holds a 5% ownership stake. OpenAI’s valuation at the time isn’t publicly known, but it was likely far lower than the $29 billion valuation OpenAI reportedly achieved in 2023 when Microsoft invested $10 billion.