The Nvidia RTX 4070 could be a powerful GPU — but it might not be for some time

The Nvidia RTX 4070 is in the pipeline – we know that, of course – but it may not arrive for a good while yet, and there are doubts about how powerful the graphics card will be, too. It could be a beast of a mid-range GPU, mind (more on that later).

This is the latest from hardware leaker Moore’s Law is Dead (MLID) on YouTube, whose latest video discusses a whole host of aspects of the RTX 4070. Keep the salt handy, as always with GPU rumors.

One of the main points made here is not to expect this graphics card any time soon. MLID believes the likely timeframe for launch is the end of the first quarter of 2023 – i.e. March – so it could be five months away.

The theory is that Nvidia still has plenty of current-generation RTX 3000 stock to shift, so to ensure Ampere GPUs sell through, Team Green will allegedly throttle the number of RTX 4090 and 4080 boards that hit shelves for the rest of this year and into 2023.

A source told MLID that this issue should be resolved by the end of the first quarter, and that is therefore the likely launch window for the RTX 4070, which will only appear once Ampere stock is sufficiently cleared. Alarmingly, the end of Q1 is the best-case scenario the leaker believes, so we could see an even longer wait for this (relatively) reasonably priced Lovelace GPU.

The slight caveat here is that a “paper launch” could come a little earlier – perhaps even towards the end of 2022 – but that would just be the reveal plus some slim initial stock. We would then be in that familiar situation where the GPU launches, sells out right away – no doubt with scalpers involved – and it’s still a waiting game for a true volume release.

Another key takeaway from this latest video is that the RTX 4070 still looks pretty strong on specs. What apparently won’t happen is what some have suggested elsewhere: that Nvidia might simply take the RTX 4080 12GB and rebadge it as the 4070.

That’s mainly because production of 12GB RTX 4080 cards had barely begun – or not much had been made, anyway – before the card was canceled. Remember, this was an add-in board partner product only, meaning Nvidia wasn’t making its own Founders Edition, which is partly why scrapping the card was relatively easy to do. And after the backlash, the GPU was widely seen as an RTX 4070 disguised as a lower-tier RTX 4080 in order to hang a higher price on it.

Because of this perception, Nvidia can’t simply churn out an RTX 4070 with the same specs as the 4080 12GB – it would make it all too clear that this card was actually a 4070 in the first place, as many (including us) argued at the time.

The upshot is that the 4070’s specification will almost certainly need to differ from the unlaunched 4080 12GB, even if it uses the same chip (AD104).

(Image credit: Future)

Specifications are important

Now, as to what exactly those specs might be, MLID believes Nvidia hasn’t defined them yet, and that they will depend on how powerful AMD’s RDNA 3 graphics cards are (the latest rumor, by the way, is that RX 7000 GPUs could be more powerful than many think).

MLID also suggests we could see a weak-sauce version of the RTX 4070: a 10GB model with a cut-down CUDA core count (not the full 7,680, but maybe 7,168, since that’s the figure previously floated on the grapevine for the 4070). This would be the path Nvidia might take if RDNA 3 doesn’t look too hot – either on raw performance, or more likely on price/performance or the overall value proposition – and if Ampere sales go faster and better than expected.

Another path MLID floats is for Nvidia to make the RTX 4070 a 12GB card, roughly the same as the canceled 4080, but with the CUDA core count reduced a little (perhaps to 7,424, for example). This would allow Team Green to argue that it isn’t just the 4080 12GB rebadged, and to maintain that perception, even though we all know what really happened behind the scenes, let’s face it. This would be the route taken if AMD’s RDNA 3 comes out reasonably strong at launch, in theory, and Nvidia feels the need to be more competitive.

The final option MLID mentions covers the scenario where RDNA 3 exceeds expectations on performance and value (as that latest rumor suggests). In this case, Nvidia might push out an RTX 4070 Ti that actually beats the 4080 12GB (perhaps with faster VRAM, he suggests).

The leaker says this is the least likely possibility, and indeed we can’t see it happening. (“Okay, here’s an RTX 4070 variant that’s actually faster than the lower-tier 4080 model we revealed – and then abandoned not so long ago” – that doesn’t really make sense. Although you could argue that Nvidia hasn’t made sense with some of its moves in the past, either; but that’s another story.)


Analysis: Patience is a virtue

What do we make of all that? Well, the general gist is that Nvidia is still weighing a big decision about the positioning of the RTX 4070, and may not make that decision until after AMD pushes its RDNA 3 GPUs out. That necessarily delays the RTX 4070, which in any case needs to wait for RTX 3000 models to sell through, so we shouldn’t expect the 4070 for quite some time.

In short, as much as we want the RTX 4070 (and indeed the 4060) to arrive soon, in theory we have a long wait ahead – perhaps until March 2023, or even later. The wait could be made all the more painful because, based on some images MLID also shares of the alleged RTX 4070 – alleged images, so add extra salt here, naturally – it seems to have a big fan and smooth airflow, and rumor has it the graphics card is looking pretty good in testing.

We could be looking at a very tempting GPU, then, if Nvidia takes the more powerful RTX 4070 route discussed above – a good, relatively high-performance graphics card that gamers will be clamoring for, of course. Except we’ll have to be patient for the GPU to show up – maybe, in fact, very patient…

Why EU AI law could hurt innovation



Will cracking down on open source AI development actually hurt the single market?

The European Union’s proposed AI law plans to restrict open source AI. But this will come at the cost of progress and innovation, says Nitish Motha of Genie AI

The European Union’s proposed Artificial Intelligence Act (AIA) – which is still under discussion – touches on the regulation of open source AI. But imposing strict limits on the sharing and distribution of open-source general-purpose AI (GPAI) would be a completely regressive move. It’s like taking the world back 30 years.

Open source culture is the single biggest reason humanity has been able to develop technology at its current blistering pace. AI researchers have only recently embraced sharing their code for greater transparency and verification, and placing restrictions on this movement would undo the cultural progress the scientific community has made.

Regulations are good and should be welcomed, but not at the expense of creativity and scientific progress.

It takes a lot of energy and effort to bring about a cultural shift in society – so it would be sad and frustrating to see this reversed. The entire AI Act needs to be considered very carefully, and the proposed changes to it have sent ripples across the AI and open source technology community.


Contradictory goals

Two goals of the EU’s proposed regulatory framework stand out in particular:

  • ‘Ensuring legal certainty to facilitate investment and innovation in artificial intelligence’, and
  • ‘Facilitating the development of a single market for lawful, safe and trustworthy AI applications and preventing market fragmentation’

The introduction of regulations on GPAI appears to contradict these stated aims. GPAI thrives on innovation and knowledge sharing without fear of repercussions and legal costs. So, instead of creating a secure market that resists fragmentation, what could actually happen is that a set of strict legal regulations stifles open source development and deepens big tech companies’ monopoly on AI development.

This is likely to create a market that is less open and, therefore, one in which it is difficult to gauge whether AI applications really are “lawful, safe, and trustworthy”. All of this is counterproductive for GPAI. Instead, the fragmentation such provisions could generate would place more power in the hands of the monopolists – a growing and troubling concern.

But… do we need regulations?

It is also important to acknowledge those who may see the backlash against the changes as an attempt by companies to rid themselves of regulation. Certainly regulations are needed to prevent serious misconduct. Without them, wouldn’t AI fall into the wrong hands?

It’s a valid concern, and yes, of course, we need regulation. But it should be applied at the level of the application, not as a broad brush stroke across all models. Each model should be evaluated on whether its use is harmful and regulated accordingly, rather than targeting open source at its source and thus limiting creativity.

This is complex, multifaceted work to carry out. Even those who agree on the whole still disagree in some areas. But the main sticking point is that the public nature of GPAI allows anyone to access it. This open, collaborative approach is the fundamental driver of progress, transparency, and technology developed for the benefit of society, collectively and individually, over business gains.

Freedom to share

Open source licenses such as the MIT license are designed for exchanging knowledge and ideas, not for selling finished, tested products – so the two should not be treated in the same way. It is true that the right regulatory balance is needed, in particular to improve the reliability and transparency of how these AI models are built, what data was used to train them and whether there are any known limitations – but this cannot come at the cost of the freedom to share knowledge.

Currently, the AI Act appears to target the creators who openly share knowledge and ideas. Instead, the legislation should be designed so that the people who use open source software are required to be careful and do their own research and testing before releasing products to a wide audience. This would expose bad actors who want to use creators’ work in commercial projects without investing in any additional research or quality controls of their own.

The final developer should in fact be responsible and accountable for scrutinizing and performing comprehensive quality checks before serving users. These are the people who will ultimately benefit commercially from open source projects. But, in its current form, the framework clearly does not take this approach. The core ethos of open source is to share knowledge and experience without expectation of commercial gain.

Regulate openly to innovate openly

Adding strict legal responsibilities for open source GPAI developers and researchers will only limit technical growth and innovation. It will discourage developers from sharing their ideas and learnings, preventing new startups and ambitious individuals from accessing the latest technology. It will deprive them of the chance to learn from, and be inspired by, what others have built.

This is not how technology and engineering work in the modern world. Sharing, and building on the work of others, is at the heart of how technology products and services are developed – and this must be maintained. Regulations are good and should be welcomed, but not at the expense of creativity and scientific progress – rather, they should be applied at the application level to ensure responsible outcomes. In the face of the changes to the AIA, one thing is clear: the open source culture must be cherished.

Nitish Motha is co-founder and CTO of Genie AI

Related:

How AI can change the rules of the game regarding data privacy – AI offers multiple benefits to businesses, but it also poses data privacy risks

Most valuable use cases for AI in web applications – This article explores how AI in web applications has helped organizations increase value

Can AI detect spam faster than humans? AI can already detect spam faster than humans, but there are limits, says Martin Wilhelm of GMX


Oscar Isaac reveals what it would take to make Moon Knight season 2

Moon Knight lead actor Oscar Isaac has detailed what it would take to get a second season of the Marvel TV series.

Talking to comicbook.com, Isaac revealed that the only way the Disney Plus show will return is if there’s a captivating story worth telling. In our Moon Knight review, we called it “the best TV series Marvel has ever made,” so we’re definitely interested in the character’s return.

Interestingly, Isaac confirmed that he has had talks with Marvel about bringing Moon Knight back to the small screen – or even the big screen. However, the Star Wars and Dune star also played down the possibility of the Steven Grant/Marc Spector/Moon Knight trio appearing in the MCU again in the near future.

“There were some specific conversations [about Moon Knight],” Isaac explained. “They were nice. The spilling of the details is that there are no details. We don’t know [if there’ll be a season 2], but we’re talking about it.”

“Honestly, it’s about the story,” he continued. “Is there a story worth telling? Is it fun? Will I be embarrassed about it when it comes out? So it comes down to: is there something worth pouring everything you have into? And with Moon Knight, there was so much about it that created a structure, so that every morning when the alarm went off, I couldn’t wait to get to the set, because I wanted to try something different.”

Yes, we’d take another season (or movie) starring Moon Knight. (Image credit: Marvel Studios/Disney Plus)

It’s clear that Isaac is eager to don the famous Moon Knight suit again – we just don’t know what form that will take. The most obvious thing for Marvel to do is greenlight a second season. But, as mentioned, there’s also the possibility of a solo Moon Knight movie, or even an Avengers-style superhero team-up film. After all, Moon Knight deals with the supernatural side of the MCU, so alliances with other notable characters, such as Doctor Strange and Blade, aren’t off the table. A Midnight Sons team-up movie, anyone?

For his part, Isaac is open to the idea. He said as much before Moon Knight arrived on Disney Plus, and Moon Knight executive producer Grant Curtis previously told TechRadar that the character could pop up anywhere in the MCU. Seven months after Moon Knight’s live-action debut, Isaac’s stance on a Midnight Sons crossover hasn’t changed, although he doubled down on his ‘story first’ approach when asked whether such a team-up would happen.

“Whether it’s a great idea coming up for a second season, or an independent movie, or whatever that could be,” he added. “I think it’s just that kind of way. It’s the story first.”


Analysis: Moon Knight’s MCU return is inevitable


Moon Knight will be back – we just don’t know when. (Image credit: Marvel Studios/Disney Plus)

Full spoilers for Moon Knight season one follow. Proceed at your own risk.

While Isaac is keen to keep expectations about Moon Knight’s return in check, it seems inevitable – at this point – that he will return.

The open-ended Moon Knight season finale suggests there’s plenty more to explore from the Dissociative Identity Disorder (DID)-influenced superhero. After six thrilling episodes, fans finally met Steven and Marc’s other, more lethal alter – Jake Lockley – in the MCU Phase 4 project. Given Jake’s fondness for violence, it will be fascinating to see how the three Moon Knight personas work together (or against each other) in another Moon Knight production.

Moon Knight’s on-and-off relationship with Khonshu is ripe for further exploration as well. We didn’t see much of it in season one, so we’d dive into this bond/partnership/call-it-what-you-will for a great watch.

Then there’s Marvel’s desire to change gears and examine the more mystical side of the MCU. The likes of Doctor Strange, Moon Knight and even Eternals began this process, while Werewolf by Night keeps pulling back the curtain on the MCU’s supernatural elements. Add in future Marvel Phase 5 productions, including Blade and Ironheart, which will explore other supernatural and magic-based corners of the Marvel cinematic juggernaut, and the studio is clearly set on illuminating the MCU’s dark and mysterious fringes.

Regardless of whether Moon Knight gets a second season or joins Blade and company in a live-action Midnight Sons movie, the character’s MCU future looks bright. In May, a faux pas by the Marvel social media team suggested that a second season was a formality. Although the mistake was quickly corrected, it does suggest that Moon Knight’s MCU future is secure.

Of course, a lot will depend on where Marvel can fit another Moon Knight project into its packed slate. The studio’s Phase 5 TV schedule is already stacked, so Moon Knight may not be able to return until Phase 6. Meanwhile, whether Isaac wants to keep playing Moon Knight long term could be a sticking point as well – though given how much he enjoyed the experience first time around, you can bet he’ll be back for more.

Want to read more Marvel-based content? Check out our Marvel movies in order guide. Alternatively, find out which previously rumored WandaVision villain is set to appear in Ironheart, or learn more about Black Panther: Wakanda Forever ahead of its release on November 11.

AI-controlled robotic laser can target and kill cockroaches

A laser controlled by two cameras and a microcomputer powered by an AI model can be trained to target specific types of insects


technology



October 20, 2022


Laser device to kill cockroaches

Eldar Rachmotulin

Researchers have created a device that uses machine vision to detect cockroaches and stun them with a laser. They say this method could provide a cheaper and more environmentally friendly alternative to pesticides.

Eldar Rachmotulin at Heriot-Watt University in Edinburgh, UK, and his colleagues paired a laser with two cameras and a microcomputer running an AI model that can be trained to target specific types of insects.

Rachmotulin says the team chose to conduct its experiments on cockroaches because their resilience makes them a rigorous test: “If you…



Google Maps lock screen widgets in iOS 16 are changing the way I take the road

Steve Jobs may have invented the three-click rule. It refers to how the late Apple CEO and co-founder pushed the original iPod team to ensure that the user was always three clicks away from playing a song.

I thought about this rule when I was creating a lock screen for car trips, and about how iOS 16’s support for lock screen widgets reduces the number of taps needed on an iPhone to launch an app or action without going to the Home screen.

I use both Apple Maps and Google Maps for different purposes – whether it’s using Apple’s app to search for specific landmarks when my wife and I are away for the weekend, or using Google Maps to get from one destination to another.

However, it’s the combination of Google Maps and iOS 16’s lock screen widgets that has been most useful to me when driving somewhere, and I hope for even more by the time iOS 17 arrives.

(Image credit: TechRadar)

When we’re about to leave on a car trip, I switch the lock screen on my iPhone to the one above: the Google Maps widget on the right lets me choose a destination I’ve visited previously, and the weather widget on the left gives me an idea of the conditions for the next few hours while I’m on the road.

That leads to the second part of setting up my iPhone for driving. While Spotify is built into Google Maps and can play a song if needed, we love listening to a bunch of podcasts on the road, so I use Apple’s Shortcuts app to automate my podcast playlist.

A Do Not Disturb automation in iOS triggers the shortcut

(Image credit: TechRadar)

Once I enable the “Do Not Disturb While Driving” Focus, an automation starts playing my podcast playlist in Overcast, resuming from where we last stopped. It’s incredibly easy to set up my iPhone for driving now; prior to iOS 16, I spent most of my pre-trip time setting up podcasts before hitting the road.

These lock screen widgets aren’t just for show – they can help bring the number of taps down to, or below, the standard Steve Jobs set more than twenty years ago with the iPod. Combine that with Shortcuts automation and it really does save time for me, and surely for many others.

When it comes to iOS 17, I’d like to be able to put more widgets on the lock screen – one above the time and three below isn’t enough; I’d like at least six.

For now, however, widgets and automation are the perfect combination for me before and while driving, which is why I consider iOS 16 a great update for my iPhone.

Gig workers in Bangladesh benefit from launch of AI microfinance platform


Written by Lindra Montero


  • AGAM International
  • Bangladeshi
  • digital lending

Entrepreneurs and “gig” workers in Bangladesh, many of whom are women, will be able to access loans for the first time through a landmark agreement between SBK Foundation and UK fintech AGAM International.

The partnership between SBK Foundation, Bangladesh’s first digital-only microfinance provider, and UK fintech AGAM International will enable entrepreneurs in Bangladesh to access the credit they need to purchase the goods that allow them to do their jobs.

Shabnam Wazeed, CEO and founder of AGAM International, said: “We are changing the face of finance, making finance accessible to all. As a founder, I am particularly proud that AGAM has joined forces with SBK to enable it to provide financial products to large numbers of entrepreneurs and workers in the ‘gig’ economy, most of whom are women, giving them the dignity of being able to apply for a loan.”

The new offering will focus on “gig” economy riders and makeup salespeople purchasing smartphones, bikes and product samples. The average loan is expected to be between 25,000 and 50,000 BDT.

Under the partnership, SBK is able to rely on AGAM’s credit scoring platform to enable unbanked people to access financing, even when they lack a traditional credit history. AGAM’s data gives SBK the confidence to identify potential borrowers.

Two of the first companies whose workers can get loans from SBK and AGAM International are Shajgoj and Food Panda.

Sonia Bashir Kabir of SBK Foundation said: “We are delighted to be able to partner with AGAM International to work together to enable workers to access the credit they need through technology. It is great to see two women-led organizations from Bangladesh working together to make a difference for individuals and communities.”



Photoshop’s answer to the Dall-E hints at the future of photo editing

This year’s Adobe Max 2022 went big on 3D design and mixed-reality headsets, but the AI-generated elephant in the room was the rise of text-to-image generators like Dall-E. How does Adobe plan to respond to these revolutionary tools? Slowly and cautiously, according to the keynote – but an important feature buried in the new version of Photoshop shows that the process has already started.

Near the end of the release notes for the latest version of Photoshop (v24.0) is an experimental feature called a “neural background filter”. What does it do? Like Dall-E and Midjourney, it lets you “create a unique background based on a description”. Simply type in the background you want, select ‘Generate’ and choose your preferred result.

This is far from an Adobe Dall-E competitor. It’s only available in the Photoshop beta, a separate testbed from the main app, and you’re currently limited to typing in a color to produce different image backgrounds, rather than weird creations from the darkest corners of your imagination.

But the neural background filter is clear evidence that Adobe, while cautious, is dipping its toes further into AI image generation. And its keynote at Adobe Max shows that it believes this frictionless way of creating visuals is undoubtedly the future of Photoshop and Lightroom – once the small matter of copyright and ethical standards is resolved.

Creative co-pilots

Adobe didn’t actually mention the arrival of the neural background filter at Adobe Max 2022, but it did indicate where the technology will eventually end up.

David Wadhwani, Adobe’s head of digital media, said the company has the same technology as Dall-E, Stable Diffusion, and Midjourney; it has simply opted not to implement it in its applications yet. “Over the past few years, we have invested more and more in Adobe Sensei, our artificial intelligence engine. I like to refer to Sensei as your creative co-pilot,” Wadhwani said.

“We are working on new capabilities that can take our major flagship applications to whole new levels. Imagine being able to ask your creative co-pilot in Photoshop to add an object to a scene simply by describing what you want, or asking your co-pilot to give you an alternative idea based on what you’ve already built. It’s like magic.” That would certainly go a few steps further than Photoshop’s Sky Replacement tool.

(Image credit: Adobe)

He said this while standing in front of a mockup of what a Photoshop with Dall-E powers would look like (above). The message was clear: Adobe could build text-to-image generation at this scale right now, but has chosen not to.

But it was Wadhwani’s Lightroom example that showed how this type of technology might be more logically integrated into Adobe’s creative applications.

“Imagine if you could combine ‘gen-tech’ and Lightroom. You could ask Sensei to turn night into day, or a sunny photo into a beautiful sunset; move shadows or change the weather. All of this is possible today with the latest advances in generative technology,” he explained, in a not-so-subtle reference to Adobe’s new competitors.

So why hold back while others are eating its AI-generated lunch? The official reason, which certainly has some merit, is that Adobe has a responsibility to make sure this new power isn’t used recklessly.

“For those unfamiliar with generative AI, it can simply conjure up an image from a text description,” Wadhwani explained. “We’re really excited about what this can do for all of you, but we also want to do it carefully. We want to do it in a way that protects and supports the needs of creators.”

What does this mean in practice? Although it’s still a little vague, Adobe will be moving more slowly and carefully than the likes of Dall-E. “This is our commitment to you,” Wadhwani told the Adobe Max audience. “We approach generative technology from a creator-focused perspective. We believe AI should enhance human creativity, not replace it, and should benefit creators, not displace them.”

This partly explains why Adobe has, so far, gone no further than the neural background filter in Photoshop. But it’s also only part of the story.

The long game

Despite being a giant of creative apps, Adobe is still highly innovative – just check out some of the projects at Adobe Labs, especially those that can transform real-world objects into 3D digital assets.

But Adobe is also vulnerable to being blindsided by fast-moving competitors. The likes of Photoshop and Lightroom were designed as desktop-first tools, which means Canva has stolen a march with its easy-to-use, cloud-based design tools. This is why Adobe agreed to pay $20 billion for Figma last month – more than Facebook paid for WhatsApp in 2014.


(Image credit: Microsoft)

Could the same thing happen with the likes of Dall-E and Midjourney? Quite possibly: Microsoft has just announced that Dall-E 2 will be integrated into its new graphic design app (above), part of the Microsoft 365 suite. AI image generators are heading into the mainstream, whatever Adobe’s reservations about how quickly that should happen.

However, Adobe also has a point about the ethical issues surrounding this powerful new technology. A copyright storm is brewing around AI image generation – and it is understandable that one of the founders of the Content Authenticity Initiative (CAI), designed to tackle deepfakes and other manipulated content, might refuse to go all-in on generative AI.

Still, Adobe Max 2022 and the arrival of the neural background filter show that AI image generation will undoubtedly be a huge part of Photoshop, Lightroom, and image editing in general – it may just take a little longer to appear in your favorite Adobe application.

It would have been better if this wasn’t just a science project

Big Blue was one of the system designers that caught the accelerator bug early on and emphatically declared that, in the long run, all kinds of high-performance computing would have some sort of acceleration – that is, some kind of specialized ASIC to which the CPU offloads its math.

Perhaps IBM is re-learning some lessons from that early HPC era a decade and a half ago – when it created the PowerXCell vector math accelerator and used it in the petaflops-capable “Roadrunner” supercomputer at Los Alamos National Laboratory – and applying those lessons to the modern age of artificial intelligence.

One can hope so, at least. If only to keep things interesting in the AI arena, the company should take itself seriously in at least some sort of HPC (and AI training surely qualifies), as its IBM Research arm appears to be doing with the new AI acceleration unit it has unveiled.

Not many details behind IBM Research’s AIU have been revealed, and so far the only things anyone has to go by are some history of IBM’s matrix and vector math units (which are by no means computational slouches), their use of mixed precision, and a blog post talking about the AIU specifically.

The AIU unveiled by IBM Research will be based on a 5nm process and supposedly manufactured by Samsung, IBM’s partner for the 7nm “Cirrus” Power10 processors for enterprise servers and the Telum System z16 processors for its mainframes. The Power10 chips contain very powerful matrix and vector math units that are an evolution of designs IBM has been using for decades, but the Telum chip uses IBM Research’s third-generation AI Core as its on-chip, low-precision AI inference and training accelerator.

The initial AI Core chip, announced in 2018, could do half-precision FP16 math with single-precision FP32 accumulation, and was instrumental in IBM’s push to bring even lower-precision data and processing to neural networks. After creating the AI accelerator for the Telum z16 processor, which we detailed back in August 2021, IBM Research has taken that accelerator as a building block and scaled it up into a single device.
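To see why FP16 math with FP32 accumulation matters, here is a minimal NumPy sketch (our illustration, not IBM code): the operands stay at 16 bits, but summing the partial products in a 32-bit accumulator keeps rounding error from growing with the inner dimension.

```python
import numpy as np

def matmul_fp32_acc(a, b):
    """FP16 operands, FP32 accumulation (AI Core style): quantize the
    inputs to float16, then multiply-accumulate in float32."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    return a16.astype(np.float32) @ b16.astype(np.float32)

def matmul_fp16_acc(a, b):
    """Same FP16 operands, but the running sum is also kept in float16,
    so every partial sum is rounded back to 16 bits."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    n, k = a16.shape
    _, m = b16.shape
    out = np.zeros((n, m), dtype=np.float16)
    for i in range(k):
        # Rank-1 update; the addition rounds to float16 every step.
        out = (out + np.outer(a16[:, i], b16[i, :])).astype(np.float16)
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 512))
b = rng.standard_normal((512, 64))
exact = a @ b  # float64 reference
err_fp32_acc = np.max(np.abs(matmul_fp32_acc(a, b) - exact))
err_fp16_acc = np.max(np.abs(matmul_fp16_acc(a, b) - exact))
print(err_fp32_acc, err_fp16_acc)
```

With an inner dimension of 512, the FP16-accumulated result drifts noticeably further from the float64 reference than the FP32-accumulated one. That is the trade-off these mixed-precision units exploit: half the memory traffic per operand without giving up a usable answer.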

Let’s review the AI accelerator on the Telum chip before getting into the new AIU.

On the z16 chip, the AI accelerator consists of 128 processor tiles (PT), likely arranged in a systolic array configuration with dimensions of 4 x 4 x 8, though IBM hasn’t been clear about that. This systolic array supports FP16 matrix math (and mixed-precision variants thereof) with FP32 floating point accumulation. It is explicitly designed to support the matrix and convolution math in machine learning – including not just inference but also low-precision training, which IBM anticipates may happen on enterprise platforms. We think it might also support the quarter-precision FP8 format for AI training and inference, in addition to the INT2 and INT4 inference that we saw in the experimental quad-core AI Core chip unveiled by IBM Research in January 2021 for compact and portable devices. Telum’s AI accelerator also contains 32 complex function (CF) tiles, which support FP16 and FP32 SIMD instructions and are optimized for activation functions and complex operations. The list of supported special functions includes:

  • LSTM activation
  • GRU activation
  • Fused matrix multiply, bias addition
  • Fused matrix multiply (broadcast)
  • Batch normalization
  • Fused convolution, bias addition, ReLU
  • Max pool 2D
  • Average pool 2D
  • Softmax
  • ReLU
  • Tanh
  • Sigmoid
  • Add
  • Subtract
  • Multiply
  • Divide
  • Min
  • Max
  • Log
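The mixed-precision pattern described above – FP16 multiplies feeding higher-precision accumulators – can be simulated in a few lines of Python. This is an illustrative sketch of the numerics only, not IBM's implementation; it uses the `struct` module's half- and single-precision packing to mimic the rounding at each stage.

```python
import struct

def fp16(x: float) -> float:
    """Round to IEEE 754 half precision (the width of the PT tile multiplies)."""
    return struct.unpack('e', struct.pack('e', x))[0]

def fp32(x: float) -> float:
    """Round to IEEE 754 single precision (the accumulator width)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def matmul_mixed(a, b):
    """Matrix multiply with FP16 products and FP32 accumulation."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for p in range(k):
                # Each product is formed at half precision...
                prod = fp16(fp16(a[i][p]) * fp16(b[p][j]))
                # ...but accumulated at single precision to limit error growth
                acc = fp32(acc + prod)
            out[i][j] = acc
    return out
```

Accumulating in FP32 is what keeps long dot products from losing accuracy even though the individual operands are stored at half precision.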

The prefetch and write-back units are linked into the z16 core's ring and the L2 cache, and also link into a scratchpad, which in turn is linked to the AI core through the data mover and formatting unit, which as the name suggests formats data so that it can run through the matrix math unit to do inference and deliver the result. The prefetcher can read data from the scratchpad at more than 120GB/s and can store data into the scratchpad at more than 80GB/s; the data mover can pull data from and push data to the PT and CF tiles in the AI unit at 600GB/s.

On System z16 iron, IBM's Snap ML framework and Microsoft Azure's ONNX framework are in production, and Google's TensorFlow framework has been in open beta for the past two months.

Now, imagine that you copied this AI accelerator from the Telum chip and pasted it into a design 34 times, like this:

These 34 cores and their uncore regions for memory, core interconnect, and the outside system have a total of 23 billion transistors. (IBM says there are 32 cores in the AIU, but you can clearly see 34 cores, and so we think two of them are there to increase the yield of chips with 32 usable cores.)

Telum z16 processors weigh in at 5GHz, but the AIU isn’t likely to run at anything close to that speed.

If you look at the AIU die, it has sixteen I/O controllers of some sort, which are probably generic SerDes that can be used for memory or I/O (as IBM did with the OpenCAPI interfaces for I/O and memory in the Power10 chip). There appear to be eight banks of Samsung LPDDR5 memory on the package too, which would be a total of 48GB of memory and provide about 43GB/s of aggregate bandwidth. If all of these controllers are driving memory, that could be doubled to 96GB of capacity and 86GB/s of aggregate bandwidth.
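That doubling is simple arithmetic; the sketch below just encodes our assumption of 6GB and roughly 5.4GB/s per LPDDR5 bank, both inferred from the 48GB and 43GB/s figures rather than confirmed by IBM.

```python
def aiu_memory(banks: int, gb_per_bank: float = 6.0,
               gbps_per_bank: float = 43.0 / 8) -> tuple[float, float]:
    """Estimated package capacity (GB) and aggregate bandwidth (GB/s),
    assuming uniform per-bank capacity and bandwidth (our guess)."""
    return banks * gb_per_bank, banks * gbps_per_bank

# Eight visible banks vs. all sixteen SerDes driving memory
eight_banks = aiu_memory(8)
sixteen_banks = aiu_memory(16)
```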

The controller block at the top of the AIU die is likely a PCI-Express 4.0 controller, but hopefully it is a PCI-Express 5.0 controller with CXL protocol support built in.

IBM hasn’t said what kind of performance to expect from the AIU, but we can make some guesses. Back in January 2021, a quad-core AI Core chip debuted at the ISSCC chip conference, etched by Samsung at 7nm, which delivered 25.6 teraflops of FP8 training and 102.4 teraops of INT4 inference running at 1.6GHz.

The AIU has 34 cores with 32 of them active, so its performance should be 8X that, assuming the clock speed remains the same (whatever that is), with 8X the on-chip cache. That works out to 204.8 teraflops for FP8 AI training and 819.2 teraops for INT4 AI inference, with 64MB of on-chip cache, in something south of a 400 watt power envelope if implemented at 7nm. But IBM is implementing it with Samsung at 5nm, and that probably puts the AIU at around 275 watts.
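Our back-of-the-envelope scaling is straightforward: multiply the published quad-core numbers by the core ratio, assuming clocks and per-core throughput are unchanged (our assumption, not an IBM figure).

```python
# Published figures for the quad-core AI Core test chip (ISSCC, January 2021)
BASE_CORES = 4
FP8_TRAIN_TFLOPS = 25.6   # FP8 training throughput at 1.6GHz
INT4_INFER_TOPS = 102.4   # INT4 inference throughput at 1.6GHz

ACTIVE_CORES = 32         # AIU: 34 cores on die, 32 usable

# Linear scaling by active core count, clocks held constant
scale = ACTIVE_CORES / BASE_CORES        # 8x
fp8_train = FP8_TRAIN_TFLOPS * scale     # estimated AIU FP8 training
int4_infer = INT4_INFER_TOPS * scale     # estimated AIU INT4 inference
```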

By comparison, the 350 watt PCI-Express 5.0 version of Nvidia’s “Hopper” GH100 GPU delivers 2TB/s of bandwidth across 80GB of HBM3 memory and 3.03 petaflops of FP8 AI training performance with sparsity support.

IBM Research will need AI cores. Lots of AI cores.


New Dashlane Report Evaluates Global Password Integrity

A new report from password manager specialist Dashlane takes a look at the state of password security around the world, and claims that many of us still aren’t protected as much as we need to be.

In what it claims is the first global analysis of its kind, Dashlane used its own algorithm to measure the security of its users’ passwords and generate a health score out of 100.

The report revealed that Eastern Europe had the highest average score of 76.4, followed by the northern and western regions of the continent with 74.3 and 73.4, respectively. Southern Europe was among the worst performing regions globally, with an average score of 71.4.

Europe on top

In the next band of scores were Central and South America, East and Southeast Asia, and Southern and Eastern Africa, with scores ranging from 72 to 73.

The Middle East, Central and Southern Asia, North and West Africa, and Oceania were among the regions with the lowest scores. North America came in last with a score of 69.1, with nearly 20% of all its passwords compromised.

According to Dashlane, scores of 90 and above are good, with anything below requiring improvement, so it looks like the entire world needs to do better, which is something password generators could well help with.

Dashlane’s scoring algorithm

Dashlane scored its users based on the vulnerabilities it identified and the quality of the passwords that matter most, such as those used for banking, email, and social media. Its algorithm runs constantly in the background of your system to make its assessments, and focuses on four main areas.

It checks for any data breaches related to your accounts by monitoring the dark web, and notifies you of potentially leaked passwords. It also deducts points from your score if you have any passwords similar to those that have been breached. The algorithm will also check the number of reused or similar passwords across your accounts; the more of these you have, the lower your score.
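Dashlane has not published its algorithm, but the shape of such a score is easy to illustrate. The sketch below is a purely hypothetical scoring function with invented penalty weights, covering just the breached and reused checks described above.

```python
def health_score(passwords: list[str], breached: set[str]) -> float:
    """Score a credential set out of 100, penalizing breached and reused
    passwords. The weights (40 and 30) are invented for illustration."""
    if not passwords:
        return 100.0
    counts: dict[str, int] = {}
    for pw in passwords:
        counts[pw] = counts.get(pw, 0) + 1
    breached_hits = sum(1 for pw in passwords if pw in breached)
    reused = sum(c - 1 for c in counts.values())  # duplicates beyond the first
    score = 100.0
    score -= 40.0 * breached_hits / len(passwords)  # compromised credentials
    score -= 30.0 * reused / len(passwords)         # reused credentials
    return max(score, 0.0)
```

A vault with one breached and one reused password out of three would land in the low-to-mid 70s under these made-up weights, roughly the range the report describes for most regions.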

The strength of each individual password is also measured using the industry-standard zxcvbn scoring system, the same one used on most websites and platforms to tell you the strength of a newly created password.

The Dashlane algorithm also excludes certain passwords from the scoring system, arguing that not all passwords are indicative of your overall security health; some have restrictions that the user cannot influence, such as smartphone passcodes and Wi-Fi passwords. Credentials may also be excluded from a business password manager account.

Who’s to Blame? UNM Professor Researches Artificial Intelligence’s Harm and Guilt: UNM Newsroom

Imagine that your identity was stolen or misused online, resulting in serious personal damage. The cause was a bug rooted in artificial intelligence (AI) technology, machinery with no name or face.

So, who is to blame? Is it the company hosting the technology, the state that deployed it, the worker who wrote a particular piece of code, or someone else entirely?

Associate Professor Sonia Gibson Rankin, UNM School of Law

This game of Clue has had, and will continue to have, massive consequences and questions for years to come. However, UNM law professor Sonia Gibson Rankin is one step closer to finding the answers.

“What happens when a state uses an algorithm to help society and it actually harms society?” asked Gibson Rankin.

In a soon-to-be-published paper in the New York University Law Review, titled “MiDAS Touch: Atuahene’s ‘Stategraft’ and the Effects of Unregulated AI,” Gibson Rankin explores the Michigan Integrated Data Automated System (MiDAS) incident.

To combat the billions of dollars owed to the federal government in the wake of the Great Recession, Michigan set its sights on modernizing the Unemployment Insurance Agency (UIA), and cutting what was seen as unnecessary expense from it, with MiDAS in 2013.

The state spent years and $47 million on the program. The goal of MiDAS was to automatically detect those committing unemployment fraud, as well as determine eligibility for unemployment benefits, track cases, and monitor income tax refunds.

From October 2013 to September 2016, MiDAS did its job; in fact, the fraud cases it found tripled to over 25,000 in just one year. Within two years, that total exceeded 40,000. With unemployment fraud claims stretching back years, tens of thousands of people faced a penalty 400% higher than usual. This generated $96 million, a glowing total that would have chipped away at Michigan’s huge debt.

The only problem: 93% of those fraud accusations were false.

“The concern with applying AI in societies without proper oversight is that by the time we understand the damage that has been done, it has already affected hundreds of thousands of people,” said Gibson Rankin.

Something within the AI bypassed the due process rights of individuals who had done absolutely nothing wrong. It was given permission to automatically find people, file claims with the IRS, garnish wages, or seize tax refunds, regardless of how long ago, or for how long, they had been unemployed.

When Michigan citizens called to find out why this was happening, no one could give them an answer. Likewise, state officials found no evidence of fraud in the vast majority of cases.

“When people called, there was no one who could explain what happened or why. The response was basically the AI saying you did this,” she said.

Initially, the accused turned to the UIA for answers. The UIA pointed to the state. The state pointed to technology vendors Fast Enterprises and SAS Institute. They pointed to management consultant CSG Government Solutions.

They all faced the same predicament: a guessing game over who is to blame.

“If you sue the state, they say the AI did it. If you sue the third-party vendor, there’s a clause to protect them, saying the state made the decision. It leaves the actual person who has been harmed by the AI without a lot of options,” she said.

After several trips to court, the state of Michigan has so far agreed to pay $20.8 million in compensation, refunding the money taken from those falsely accused of fraud.

That wasn’t enough, according to Gibson Rankin. Many of those affected felt the same.

In Cahoo vs. SAS Analytics, the state argued that redress was achieved by granting refunds. The plaintiffs argued that their due process rights were violated beyond the financial harm, as they had to disentangle themselves from the fraud allegations.

“How do I give back or address the fact that you may have had to file for bankruptcy? How do I address the fact that, while all this was happening, you may have lost a new job because you had been labeled in the systems as committing unemployment fraud? How do I address the fact that families may have broken up, that people were driven from their homes, because of those labels?”

The Michigan Supreme Court has sided with the plaintiffs, finding that an “artificial intelligence made me do it” defense was insufficient.

Not only that, but the state is still working on making the rest of the payments.

As residents work toward restitution, questions remain for legal minds like Gibson Rankin.

How do you prevent the biases that exist in artificial intelligence to begin with? Can you really hold tech accountable as a result? How far will artificial intelligence go unchecked?

“When technology is unregulated, it will thrive in all kinds of unique innovations. But there are some parts where a lack of regulation leads to serious disaster.” – Sonia Gibson Rankin

In March 2022, Michigan Governor Gretchen Whitmer planned to allocate $75 million to replace the MiDAS system, in search of a “human-centred” system.

Gibson Rankin believes that from now on, there must be groups and discussions in place to answer these questions before AI grows too big and gets into the weeds.

“I think we’re going to see a lot of what the AI community does as it continues to operate underground, where people can’t trace the source of the damage,” she said.

She is also working with other professors on the potential development of a computational justice course at UNM.

“It’s going to take all of us sitting at the table to get it right from the start,” she said.

You can read the full research paper and learn more about the MiDAS incident by following the link here.
