Hive wants to offer enterprises AI-model training on its fleet of GPUs as part of its pivot to artificial intelligence.
Bitcoin mining firm Hive Blockchain (HIVE) aims to let customers train large language AI models in its data centers, touting better privacy than rivals such as OpenAI's ChatGPT, it said on an earnings call with analysts on Friday.
“Companies are now aware that they don't want to upload sensitive customer data to a company like OpenAI, which runs a publicly accessible LLM [large language model]. What we aspire to offer at Hive, through Hive Cloud, is privacy: companies can have a service agreement in place, retain ownership of their data and their privacy, and still run their AI [artificial intelligence] compute workloads on our bank of GPUs [graphics processing units],” said Aydın Kılıç, the mining firm's CEO and president.
Hive's Nasdaq-listed shares gained about 2% on Friday.
Miners have been increasingly pivoting to AI as mining economics have squeezed their profitability, with some facing bankruptcy, while the AI sector sees a boom in interest from investors. However, it remains to be seen whether miners can compete with big tech firms such as Google and Amazon Web Services, which benefit from both economies of scale and decades of experience running high-quality customer-facing data centers.
Large language models understand and generate human language using probabilistic calculations. They are often trained on graphics processing units, a type of semiconductor chip originally designed for image rendering that has proved well suited to running AI workloads.
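To make "probabilistic calculations" concrete, here is a minimal, illustrative Python sketch (the vocabulary and scores are invented) of the core step: a model assigns a score to each candidate next token, softmax turns the scores into a probability distribution, and one token is sampled.

```python
import numpy as np

# Invented toy vocabulary and per-token scores (logits).
vocab = ["mining", "models", "privacy", "GPUs"]
logits = np.array([1.2, 3.1, 0.4, 2.2])  # higher score = more likely

# Softmax: convert scores into probabilities that sum to 1.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Sample the next token according to the distribution.
next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```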
Hive has a fleet of 38,000 GPUs left over from its days mining ethereum. Some of those it has redirected to mining altcoins, while others are available to rent as a service or are deployed in its cloud offering.
The firm expects an annual revenue run-rate of $1 million from its GPUs, it said on the earnings call. Already, “500 GPU cards generated $230,000 revenue this quarter,” said the firm’s Chairman, Frank Holmes, in a press release discussing the annual and quarterly earnings. Hive’s fiscal year ended on March 31, 2023.
Hive reported revenue of $106.3 million for the fiscal year ended March 31, 2023, with a gross operating margin of $50.4 million, or 47% of revenue. That is roughly half of the record $211.2 million in revenue it posted in fiscal 2022, when its gross operating margin was 78% of revenue.
Overall for fiscal 2023, the firm reported a net loss of $236.4 million, noting that this included several non-cash charges, such as $81.7 million of depreciation, a $70.4 million impairment of equipment and a $27.3 million impairment of deposits. By contrast, the firm reported net income of $79.6 million for fiscal 2022.
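The reported figures are easy to sanity-check; a short Python snippet using only the numbers above:

```python
# Quick sanity check of the reported figures (all in USD millions).
fy23_revenue, fy23_gross_margin = 106.3, 50.4
print(f"FY2023 margin: {fy23_gross_margin / fy23_revenue:.0%}")   # ~47%

# Annualizing the quarterly GPU revenue gives the ~$1M run-rate.
quarterly_gpu_revenue = 0.23
print(f"GPU run-rate: ${quarterly_gpu_revenue * 4:.2f}M/year")    # $0.92M

# The stated non-cash charges account for most of the FY2023 net loss.
non_cash = 81.7 + 70.4 + 27.3
print(f"Non-cash charges: ${non_cash:.1f}M of the $236.4M net loss")
```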
Relying too heavily on automation and AI is akin to commercial airlines simply using autopilot: there’s too much on the line to dismiss the importance of human intervention, Oz Rashid writes.
As recruiting methods come to rely increasingly on artificial intelligence (AI), the principles used to identify qualified individuals have come into question.
While these machines may seem like a worthy replacement for subjective hiring managers, they can perpetuate historic company and algorithmic bias, discriminate on the basis of an applicant's gender or age, and undermine the laws of our democracies.
Therefore, the world’s workforce shouldn’t be selected through AI alone.
Trained professionals must be knowledgeable about AI recruiting methods and implement sufficient safeguards to keep programs ethical and the world safe.
Cases showing AI bias already exist
While AI has the potential to improve many aspects of the workplace, it also has the capability to undermine acceptable hiring practices and exacerbate wealth inequality.
A 2022 Cambridge University study revealed how voice and phrenology analyses are unreliable, biased methods for identifying ideal applicant traits.
Phrenology, a pseudoscience that claims skull patterns are linked to particular human characteristics, has been heavily contested, and voice analyses can be regarded as another underdeveloped practice not worthy of determining who deserves a job.
Just as AI hiring tools can unintentionally discriminate, they can also be applied consciously as a way to manipulate a candidate pool.
The software can easily be hardwired to reject applicants with disabilities, a specific racial background, or based on physical profiles alone.
For instance, a case from 2018 showed that while Amazon’s recruitment tool was designed to solicit applicants of all genders, the hiring data used to train the tool was male-dominant.
This caused the machine to interpret women as incompatible candidates and reject their applications.
In 2019, the company HireVue released software that ranked job applicants' employability based on their facial movements, word choice, and how they spoke.
While the Amazon case stemmed from biased data alone, this kind of detailed analysis could screen applicants on information they didn't even know they were sharing.
Algorithms can have biases, too
Before AI recruitment tools, Applicant Tracking Systems (ATSs) were the popular applications, in use since the 90s.
ATSs are helpful for sourcing, filtering, and analysing candidates throughout the entire recruiting and hiring process, but they can exacerbate workplace bias, and many have become outdated.
Therefore, replacing old ATSs with modernised tools is smart as long as experienced professionals are still involved.
Too often, hiring managers and other business leaders think that AI will replace the jobs of talented HR teams because it’s less biased and more efficient.
In reality, even the most carefully programmed AI will have algorithmic biases and can make disconcerting decisions.
All humans have an unconscious bias to prefer things familiar to them, which can directly skew the hiring data a machine learns from.
Managing AI use in recruiting is crucial for protecting the rights of every citizen and ensuring that people have access to the opportunities that sustain them and their families.
This kind of regulation can only be done through proper safeguards and monitoring.
We shouldn’t rely on autopilot too much
From the moment candidates are sourced, machine learning and predictive algorithms choose which individuals should see a job advertisement and which should be accepted.
A study from Northeastern University in Boston and USC reported that Facebook showed cashier job ads to audiences that were 85% women, and a taxi company's vacancies to audiences that were 75% Black.
This situation most likely resulted from an AI’s independent evaluation of what applicants each company preferred.
While that behaviour isn’t inherently malicious, it’s in violation of US Equal Employment Opportunity laws.
Unfortunately, 79% of organisations used a combination of automation and AI for hiring in 2022, and the majority are completely unaware that their system is producing biased outcomes.
Relying too heavily on automation and AI is akin to commercial airlines simply using autopilot: there’s too much on the line to dismiss the importance of human intervention.
Future-proofing efforts are everyone’s responsibility
In the past, organisations may have been shielded from being held accountable for their biases.
However, new legislation like New York's Local Law 144 mandates transparency and accountability in AI hiring.
The European Union’s upcoming AI Act is somewhat more opaque as to what type of remedies will be in place to prevent AI’s hiring bias, but the intention to create protections is there.
Meanwhile, organisations should already be educating themselves on which protocols protect them and their applicants.
Safeguards such as auditing systems designed to detect bias, and thorough reviews of the data sets AI systems learn from, can minimise the risk of exacerbating discrimination in the recruitment and hiring process.
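As one concrete, simplified example of such an audit, the "four-fifths rule" used in US adverse-impact analysis compares selection rates across groups; a minimal Python sketch with invented numbers:

```python
from collections import Counter

# Hypothetical screening outcomes: (group, passed_ai_screen) pairs.
outcomes = (
    [("men", True)] * 80 + [("men", False)] * 20
    + [("women", True)] * 50 + [("women", False)] * 50
)

applied = Counter(group for group, _ in outcomes)
passed = Counter(group for group, ok in outcomes if ok)

# Selection rate per group, and the ratio of lowest to highest rate.
rates = {g: passed[g] / applied[g] for g in applied}
ratio = min(rates.values()) / max(rates.values())

print(rates)                    # {'men': 0.8, 'women': 0.5}
print(f"impact ratio: {ratio:.2f}")
if ratio < 0.8:                 # the "four-fifths" threshold
    print("Possible adverse impact -- review the model and training data.")
```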
Vendors should also be able to provide transparency within their algorithms, including how they were trained, what data was used, and what assumptions were made.
This information should be accompanied by clear explanations of their efforts to mitigate bias and verified commitments to ongoing testing that will catch future biases.
AI can still improve existing hiring practices if incorporated safely
Permitting AI to fully control an entire company’s hiring processes should never be acceptable.
This allowance disengages employers from the people they need, and even worse, it could perpetuate discriminatory hiring decisions.
To counter this, we must learn how to incorporate AI safely and effectively into our existing hiring practices. This will benefit the long-term health of our organisations, the workforce that supports them, and global economies.
AI is a marvellous tool with a unique value proposition, but there is a balance that we need to strike to produce transformational results and keep humankind in control of its future.
_Oz Rashid is the CEO and founder of MSH, a global tech and talent solutions company._
_At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation._
AI coins have lately been drawing attention in the Bitcoin and altcoin market, and many AI-focused cryptocurrencies now trade there. Against that backdrop, the CEO of Bitwise has made some notable comments.
A new wave of activity in the altcoin market
Bitwise CEO Hunter Horsley shared his views in a recent tweet. Horsley says artificial intelligence has captured the attention of technology enthusiasts and industry experts alike, and he shares a belief in AI's transformative power for the altcoin world. He highlighted two key dimensions along which AI and blockchain technology could intersect and drive change across a range of sectors.
Horsley's first focus is authenticity. In a digital landscape characterized by an explosion of content creation, verifying what is genuine is a challenge. As AI technology continues to advance, it will produce vast amounts of images, audio and software, and this surge of content brings with it the problem of telling the real from the fake. Defending against the ever-growing threat of fraud, deepfakes and plagiarism becomes critical.
How do we overcome the problem?
Horsley also offers a solution. In his view, cryptographic tools such as public/private-key cryptography and zero-knowledge proofs are important, and he suggests that blockchain technology, the foundation of the altcoin world, can play a significant role. By scaling and distributing these tools with built-in trust, blockchains can serve as a reliable platform for verifying the authenticity of creations, creators and individuals, ultimately reducing the risks associated with misinformation and fraud.
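The tweet does not specify an implementation, but the provenance idea can be sketched with standard public/private-key signatures; a minimal, illustrative Python example using the `cryptography` package (the blockchain-anchoring step is only noted in a comment):

```python
# A creator signs content with a private key; anyone can verify
# authenticity with the matching public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()
content = b"original article text"
signature = creator_key.sign(content)      # published alongside the content

public_key = creator_key.public_key()      # could be anchored on a blockchain
try:
    public_key.verify(signature, content)  # raises if content was tampered with
    print("content is authentic")
except InvalidSignature:
    print("content was altered or forged")
```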
The second dimension Horsley highlighted concerns emerging needs: the demand for autonomous AI agents that can transact, hold balances and store information without human intervention. As AI advances, AI agents are becoming more capable of carrying out tasks and projects independently, which raises questions about their ability to handle sensitive information and execute financial transactions securely.
Here Horsley asks important questions. How will we trust AI agents to manage budgets and sign contracts? How will they transact securely with one another and custody assets? Horsley says solutions to these challenges will matter, pointing to blockchain technology, stablecoins and smart contracts, and arguing that decentralized finance (DeFi) and other solutions yet to be built will become valuable. These innovations from the altcoin world would let AI agents interact autonomously, provide a secure and transparent framework for transacting, and ultimately open new possibilities for machine-to-machine value exchange.
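None of this tooling exists yet in the form Horsley describes; purely as a toy illustration of the budget-trust question, here is a Python sketch of an agent wallet whose spending cap is enforced in code, the kind of rule a smart contract could enforce on-chain (all names are invented):

```python
class AgentWallet:
    """Toy wallet that refuses payments beyond a hard budget cap."""

    def __init__(self, balance: float, budget_cap: float):
        self.balance = balance
        self.spent = 0.0
        self.budget_cap = budget_cap

    def pay(self, recipient: str, amount: float) -> None:
        if self.spent + amount > self.budget_cap:
            raise PermissionError("over budget: transaction rejected")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        self.spent += amount
        print(f"paid {amount} to {recipient}")

wallet = AgentWallet(balance=100.0, budget_cap=25.0)
wallet.pay("data-provider-bot", 10.0)       # ok: within the cap
try:
    wallet.pay("compute-broker-bot", 20.0)  # would exceed the cap
except PermissionError as err:
    print(err)
```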
Others share similar views
The conversation sparked by Hunter Horsley's tweet deepened and quickly gained momentum when Circle CEO Jeremy Allaire said he fully agreed. Allaire says AI and blockchain technology are inherently compatible, and that they can draw on each other's strengths to create a more efficient and secure digital ecosystem. He highlights data provenance, machine-generated and machine-enforced contracts, and machine-to-machine value exchange as areas where the convergence of AI and blockchain could have significant impact. There are already reports of AI bots using on-chain wallets for various purposes and transacting in stablecoins such as USDC, examples that illustrate the growing synergy between AI and cryptocurrencies as real-world applications begin to emerge.
As we have covered at Kriptokoin.com, the crypto world hosts many AI-focused cryptocurrencies. The top five are The Graph, Render Token, Injective, SingularityNet and Oasis, followed by Ocean Protocol, Fetch.ai, iExec RLC, Numeraire and Insure DeFi.
Artificial intelligence (AI) tools are changing the way we work – especially the media. Here are the rules of the road for CoinDesk.
New tools driven by artificial intelligence (AI) have been grabbing headlines over the past several months. The basic gist of these tools is that in response to specific prompts, they can “create” content (whether text, imagery or something else) much faster than a human ever could. Once they’ve been “trained” on extensive datasets, these tools can essentially predict what a user wants, often with stunning accuracy.
With the right set of queries, chatbots such as ChatGPT can write entire articles about specific topics in mere seconds. AI-driven image generators can instantaneously produce illustrations to represent abstract topics. Still other tools can synthesize video and audio content from the “raw material” of text and images.
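For illustration, a query like the ones described can be sent to OpenAI's documented chat-completions REST endpoint; the prompt, model choice and environment variable below are assumptions for the sketch, not a description of CoinDesk's workflow:

```python
import os
import requests

# Send one article-style prompt to the chat-completions endpoint.
# Assumes an API key is available in the OPENAI_API_KEY env var.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{
            "role": "user",
            # Note how style and length can be specified in the prompt.
            "content": "In about 300 words, in a neutral news style, "
                       "explain what a large language model is.",
        }],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```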
This obviously has massive implications for creative fields, and in particular media organizations like CoinDesk. We’ve been researching AI tools for the past few months, while simultaneously observing how other media companies have been using AI. We want to empower our staff to take advantage of these tools to work more effectively and efficiently, but with a process that safeguards our readers from the well-documented problems that can arise with AI content – as well as the rights of the original content creators on which the generative content is based.
There are several use cases for AI in the process of creating content. This article deals with the main ones that are relevant to CoinDesk’s content team. It does not cover every use case, and does not speak to workflow outside of the process of content generation.
Generative text in articles
Current AI chatbots can create text from queries very quickly. Users can also customize the text with adjustments to the query — complexity, style, and overall length can all be specified.
However, an AI cannot contact sources or triage fast-breaking information reliably. While it performs some tasks extremely well, AI lacks the experience, judgment and capabilities of a trained journalist.
AI also makes mistakes, sometimes serious ones. Generative tools have been known to “hallucinate” incorrect facts and state them with confidence that they’re correct. They have occasionally been caught plagiarizing entire passages from source material. And even when the generated text is both original and factually correct, it can still feel bland or soulless.
At the same time, an AI can synthesize, summarize and format information about a subject far faster than a human ever could. AI can almost instantaneously create detailed writing on a specific subject that can then be fact-checked and edited. This has the potential to be particularly useful for explanatory content.
Given its limitations and the potential pitfalls, the writing of an AI should be seen as an early draft from an inexperienced writer. In more illustrative terms, an AI tool is comparable to an intern who can write really fast. The analogy is apt: Typically, interns need a great deal of supervision in their work. They are often unfamiliar with the area they’re writing about and the audience they’re writing for, occasionally leading to serious errors. The editor assigned to their work needs to edit their work carefully, check the underlying facts and help tailor the article to the audience.
However, with the right editing process, the work of an intern can be made publishable relatively quickly, especially if the intern has command of the English language (something AI excels at). Similarly, with the right safeguards in place that both prioritize a robust editing process and target the specific pitfalls of AI, we believe that sometimes using generative text in articles can help writers and editors publish more information faster than a purely human-driven process.
With that in mind, CoinDesk will allow generative text to be used in some articles, subject to the following rules. The generative text must be:
Given the requirements and the inherent limitations of AI with respect to the primary ingredients of journalism (e.g., talking to sources), the number of use cases for generative text is small. However, we see an opportunity for AI to assist in explanatory content, such as in this article itself. In every case where generative text is used in the body of an article – whether in whole or in part – the AI's contribution will be made clear through both a disclosure at the bottom of the article and the AI's byline: CoinDesk Bot.
Generative images
CoinDesk will immediately discontinue the use of generative images in content, due to pending litigation around the use of proprietary imagery as “training” data for various AI-driven image generators. We might make an exception when the point of the article is to discuss generative images and the images are used in a way that constitutes fair use, but such exceptions would be granted on a case-by-case basis.
Using a generative image tool to help “inspire” a work of art created by a human is generally OK (this is akin to doodling on scrap paper) with the caveat that the human-created image should not be a de facto copy of the AI-generated image.
Generative voices
AI tools can generate or use human-sounding voices to read copy, effectively turning articles into audio clips or podcasts. Though CoinDesk doesn’t currently employ these tools, we see the practice as an evolution of tools that already exist for the visually impaired. If possible, the use of an AI voice generator will be disclosed in the accompanying show notes.
Social copy
Social copy typically functions as a short summary of an article, crafted for a specific platform. Because of its short length, social copy is relatively easy to fact-check and edit, and some AI text tools may be adept at crafting text in the style of specific platforms. In addition, there is less expectation among social audiences that the text accompanying a linked story is original.
For these reasons, CoinDesk allows AI-generated social copy as long as the person preparing the post edits and fact-checks the copy (which is standard), and for the same reasons we don’t think disclosure is necessary (and would lead to some very clunky tweets). As with use in articles, using generative images in social posts is forbidden.
Headlines
Like social copy, headlines are quickly fact-checked and edited. Because editors will always be directing the process, we view AI-written headlines as suggestions, and they are thus allowed. Disclosure isn't necessary because this process does not add any new information, and editors will always check headlines for accuracy and style. This also applies to subheadings and short descriptions.
Assistance with research
AI may sometimes be able to assist in summarizing long documents such as court filings, research papers and press releases, among others. As long as no part of the text generated is copied to a published article, this is generally allowed with no disclosure needed, with two important caveats:
AI-generated story ideas
Any ideas generated by an AI will inherently need to be vetted and researched by the reporter or editor, so this is allowed. Unless actual text generated by the AI ends up in the final article, it’s not required to disclose that the idea was originally suggested via AI (although the author still may want to do so).
The future
These are the rules of the road for CoinDesk as we travel forward into an AI-driven future. That road may change direction suddenly, expand to a multi-lane divided highway or perhaps even come to a dead end, so we expect these rules to evolve in the coming months and years. Regardless, we’re determined to tread into this new frontier, but to tread carefully. We want these rules to empower our content team to work smarter, using AI for the very specific tasks that machines are best at, so humans can focus on what they’re best at: journalism.