Spider-Man Remastered on the PS5 is replacing Peter Parker. Not the character, but his face. Sony-owned developer Insomniac Games has revealed that the Spider-Man role has been recast with Ben Jordan. He replaces John Bubniak, who played Peter Parker in Marvel’s Spider-Man on the PS4, although Yuri Lowenthal will still voice him, if you were worried. A subset of Marvel’s Spider-Man fans aren’t happy with the move, though, because they believe the new version looks suspiciously similar to Tom Holland, who plays Peter Parker/Spider-Man in the Marvel Cinematic Universe.

And you know what? They might have a point. Here are photos of Jordan (above), Bubniak, and Holland as Peter Parker, in addition to one of Lowenthal. Who do you think Jordan looks more like: Holland or Lowenthal?

John Bubniak as Peter Parker in Spider-Man on the PS4
Photo Credit: Insomniac Games


Tom Holland as Peter Parker in Spider-Man: Homecoming
Photo Credit: Sony Pictures


Yuri Lowenthal, who voices and does facial capture for Spider-Man
Photo Credit: Gage Skidmore

Insomniac thinks it’s the latter, and that’s how they justify the recasting. In a PlayStation Blog post that carried the reveal, the Spider-Man Remastered developer said Jordan is a “better match” to Lowenthal’s “facial capture”. And this is how he’s going to stay when future Spider-Man PlayStation titles roll around.

It’s also worth noting that Sony, which owns PlayStation and Insomniac, is also the rights holder for Spider-Man films.

“Today’s news about the new Peter Parker face model has surprised some of you, and we at Insomniac totally understand your reaction,” Insomniac creative director Bryan Intihar tweeted. “Heck, it even took me awhile to get used to Peter’s new look. But as we discussed the franchise’s future and moving to the PS5, it quickly became apparent that delivering even more believable-looking characters made finding a better facial match for actor Yuri Lowenthal — who we all love as Peter — a necessity.

“We care as much about this character as your attachment to him, so please know we didn’t make this decision/change lightly. As we did throughout the development of Marvel’s Spider-Man, we’ll continue to read your comments, listen, and always be looking for ways to improve every facet of the game. At the same time, I hope you can trust us that this decision is what we feel is best for the future of the franchise and our upcoming goals for this beloved Marvel character.”

Lowenthal tried to play down the backlash by pivoting to self-deprecating humour, noting that it’s the bones of his face that required the change.

Some fans aren’t buying Insomniac’s reasoning, though, and have pointed out that the Peter Parker in Marvel’s Spider-Man has been Spider-Man for eight years, while the new version looks like he’s still in school, just like Holland’s Peter Parker in the Marvel movies.

Spider-Man Remastered PS5 details

While that’s the biggest change, Insomniac Games shared some other details for Spider-Man Remastered on the PS5, including a 60fps performance mode, improved assets and models, ray-traced reflections and ambient shadows, individually rendered strands of hair, near-instant load times, 3D audio on compatible headphones, new photo mode features, and three new suits.

Spider-Man Remastered is out November 12 on the PS5. It’s available as part of Spider-Man: Miles Morales Ultimate Edition ($70 in the US, TBA in India) or as a paid in-game upgrade with Spider-Man: Miles Morales Standard Edition ($50 in the US, TBA in India).




The Mandalorian Season 1 Recap Distills the Star Wars Series Into 89 Seconds


Before The Mandalorian season 2 premieres Friday afternoon on Disney+ Hotstar (and Friday midnight on Disney+ in the US), Disney and Lucasfilm have given us an official 89-second recap of The Mandalorian season 1. That’s very brief, but it speaks to the fact that The Mandalorian wasn’t a narratively heavy show in its debut season last year.


The Mandalorian season 1 recap touches upon Mando’s (Pedro Pascal) profession (he’s a bounty hunter), his newest target (Baby Yoda), the people he meets along the way — Cara Dune (Gina Carano), Greef Karga (Carl Weathers), and Kuiil (voiced by Nick Nolte) — and the consequences of his decision to take Baby Yoda under his wing.

“You have something I want. It means more to me than you will ever know,” the darksaber-wielding villain Moff Gideon (Giancarlo Esposito) says deep into The Mandalorian season 1 recap, as we are given a reminder of the Star Wars series’ action-heavy side. Gideon then declares: “It will be mine.”

The season 1 recap wraps by setting up The Mandalorian season 2, as tribe leader The Armorer (Emily Swallow) instructs Mando to reunite Baby Yoda “with its own kind”. Mando wonders: “You expect me to search the galaxy for the home of this creature?” Well, yes, otherwise what would we do in season 2, Mando.

In addition to Pascal, Carano, Weathers, and Esposito, The Mandalorian season 2 also stars Omid Abtahi as Dr. Pershing, Horatio Sanz as Mythrol, Rosario Dawson as Ahsoka Tano, Katee Sackhoff as Bo-Katan Kryze, Temuera Morrison as Boba Fett, Timothy Olyphant as former slave Cobb Vanth, Michael Biehn as a rival bounty hunter, and Sasha Banks in an undisclosed role.

Jon Favreau (The Lion King, Iron Man) created The Mandalorian and serves as showrunner and head writer on the Star Wars series. Favreau and Weathers are among the directors on season 2 alongside Dave Filoni, Rick Famuyiwa, Bryce Dallas Howard, Peyton Reed, and Robert Rodriguez.

The Mandalorian season 2 premieres October 30 on Disney+ Hotstar in India. Episodes will air weekly.


Wild West for developers when it comes to writing cloud-native apps


Commentary: Containers ate your infrastructure, but what comes next at the application layer? A new survey points to big, industry-wide decisions to be made about the tech used to write applications.

Image: vladans, Getty Images/iStockphoto

Twenty years ago it seemed certain that the underpinnings of future data center infrastructure would be Linux clusters running on x86 “commodity” hardware. We just didn’t know what to call it or where exactly it would run.

The big systems vendors like IBM, Sun, HP, and Cisco weren’t calling it “cloud”; instead, the vendors named it utility computing, autonomic computing, grid computing, on-demand, and any number of other terms. At Comdex 2003, ZDNet reported that “participants in a panel discussion at Comdex agree that utility computing is more like a river than a rock, but have little luck nailing down a real definition.” (ZDNet is a sister site of TechRepublic.)

Two decades later we know what to call it (“cloud”), and we know it’s built with containers and a whole lot of Linux. As detailed in the new Lightbend survey Cloud Native Adoption Trends 2020-2021, 75.2% of respondents already host the majority of their applications in some sort of cloud infrastructure, and roughly 60% run most of their new applications in Kubernetes/containers.

Now we’re faced with another major rethink that will affect tens of millions of developers operating at the application layer, where there are common threads on crucial concepts, but everyone is bringing different perspectives and predictions for the future.

SEE: Top 5 programming languages for systems admins to learn (free PDF) (TechRepublic)  

Higher up the stack

We’ve all felt this happening as containers have eaten into the virtual machine landscape, as web programming languages (JavaScript) surpass JVM/server-side languages (Java) in developer popularity, and as serverless, JAMstack, and other still-being-named phenomena change the “developer experience” in writing cloud-native applications. The diversity of choice in the “right way” to write software for the cloud has become a bit of a Wild West for developers.

As Google developer advocate Kelsey Hightower put it earlier this year, “There’s a ton of effort attempting to ‘modernize’ applications at the infrastructure layer, but without equal investment at the application layer, think frameworks and application servers, we’re only solving half the problem.”

“There’s a huge gap between the infrastructure and building a full application,” said Jonas Bonér, CTO and co-founder at Lightbend, in an interview. “It’s an exercise for the programmer to fill in this huge gap of what it actually means to provide SLAs to the business, all the things that are hard in distributed systems but needed for the application layer to make the most of Kubernetes and its system of tools.”

SEE: Top cloud trends for 2021: Forrester predicts spike in cloud-native tech, public cloud, and more (TechRepublic)

Lightbend’s cloud adoption report highlights some of these major decision points that remain murky for the application layer of the cloud-native stack.

“Building cloud-native applications means creating software that is designed with the advantages—and disadvantages—of the cloud in mind,” said Klint Finley, author of the Lightbend survey. “It means taking advantage of the fact that it’s possible to outsource entire categories of functionality—like databases and authentication—to public cloud services and planning for the fact that communication between those cloud components might be unreliable.”
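To make that last point concrete, here is a minimal sketch, purely for illustration (the service URL, timeout, and retry counts are hypothetical, not drawn from the survey), of one common way an application can call an outsourced cloud service while planning for unreliable communication: a per-request timeout plus retries with exponential backoff.

```typescript
// Sketch: call an outsourced cloud service while planning for unreliable
// communication, using a per-request timeout and exponential backoff.
async function fetchWithRetry(url: string, attempts = 3, baseDelayMs = 200): Promise<Response> {
  for (let attempt = 0; attempt < attempts; attempt++) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 2_000); // give up on a hung call
    try {
      const response = await fetch(url, { signal: controller.signal });
      if (response.ok || response.status < 500) return response; // only retry 5xx errors
    } catch {
      // Network error or timeout; fall through and retry.
    } finally {
      clearTimeout(timer);
    }
    // Back off before the next attempt: 200 ms, 400 ms, 800 ms, ...
    await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
  throw new Error(`Gave up on ${url} after ${attempts} attempts`);
}

// Hypothetical usage against a managed authentication service:
// const profile = await fetchWithRetry("https://auth.example.com/userinfo");
```

Libraries and service meshes can absorb some of this, but the broader point stands: the application layer has to plan for failure, not just the infrastructure layer.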

Some developers still prefer frameworks they personally maintain and scale, while the business side clearly prefers frameworks that are delivered “as a service” via API, the survey says. Namely, 54.7% of managers said it was their highest priority to write business applications that specifically leverage the underlying cloud infrastructure vs. 38.3% of developers. Meanwhile, consuming back-end services by API rather than building and maintaining your own is the defining characteristic of the emergent JAMstack (JavaScript / API / Markup) architecture, which has the weight of Facebook’s React framework momentum behind it. But it’s a completely different approach than the old-school server-side mindset of the Java developers who still rule the roost and command vast legacy systems at most major enterprises.
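As a rough illustration of that JAMstack-style approach (the API endpoint and data shape below are hypothetical), a build step can consume a back-end service purely over its API and emit static markup, rather than rendering pages on a server for every request:

```typescript
// Sketch of the JAMstack idea: consume a back-end service over its API at
// build time, then ship plain markup that any CDN can serve as static files.
import { writeFileSync } from "node:fs";

interface Post {
  title: string;
  body: string;
}

async function build(): Promise<void> {
  // The back-end is someone else's service; we only call its API (hypothetical URL).
  const response = await fetch("https://api.example.com/posts");
  const posts = (await response.json()) as Post[];

  // Render the data into markup once, at build time, not per request on a server.
  const articles = posts
    .map((post) => `<article><h2>${post.title}</h2><p>${post.body}</p></article>`)
    .join("\n");

  writeFileSync("index.html", `<!doctype html>\n<main>\n${articles}\n</main>\n`);
}

build().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Everything dynamic lives behind the API calls; everything served to users is prebuilt markup and JavaScript.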

The survey also suggests that developers think about cloud computing more in terms of specific technologies like Kubernetes and containers, while management thinks of cloud computing more as a new way to build applications. Management tends to prefer outsourcing as much maintenance as possible, while developers’ preference for configurability over automation reveals a desire not to lose too much control over the many layers of an application stack. As one respondent put it: “SaaS comes with ease of adoption and faster time to market, however many do not understand the cost of running them at scale.”

Disclosure: I work for AWS, but the views expressed herein are mine.


New materials help expand volumetric 3D printing

Researchers at Lawrence Livermore National Laboratory (LLNL) have adapted a new class of materials for their groundbreaking volumetric 3D printing method that produces objects nearly instantly, greatly expanding the range of material properties achievable with the technique.


Using a custom volumetric additive manufacturing 3D printer, Lawrence Livermore researchers were able to build tough and strong, as well as stretchable and flexible, objects nearly instantly from a class of materials known as thiol-ene resins. Photo by Maxim Shusteff/LLNL.

The class of materials adapted for volumetric 3D printing are called thiol-ene resins, and they can be used with LLNL’s volumetric additive manufacturing (VAM) techniques, including Computed Axial Lithography (CAL), which produces objects by projecting beams of 3D-patterned light into a vial of resin. The vial spins as the light cures the liquid resin into a solid at the desired points in the volume, and the uncured resin is drained, leaving the 3D object behind in a matter of seconds.

Previously, researchers worked with acrylate-based resins that produced brittle and easily breakable objects using the CAL process. However, the new resin chemistry, created through the careful balancing of three different types of molecules, is more versatile and provides researchers with a flexible design space and wider range of mechanical performance. With thiol-ene resins, researchers were able to build tough and strong, as well as stretchable and flexible, objects, using a custom VAM printer at LLNL. The work was recently published in the journal Advanced Materials and highlighted in Nature.

“These results are a key step toward our vision of using the VAM paradigm to significantly expand the types of materials that can be used in light-driven 3D printing,” said LLNL engineer Maxim Shusteff, the work’s principal investigator and head of a Laboratory Directed Research & Development project in advanced photopolymer materials development.

In the paper, researchers also demonstrated, for the first time, a method for designing, predicting, and measuring the 3D energy dose delivered into the resin, successfully printing 3D structures in the thiol-ene resin through tomographic volumetric additive manufacturing. The demonstration creates a common reference for controlled 3D fabrication and for comparing resin systems, researchers said.

The team concluded the work represents a “significant advancement” for volumetric additive manufacturing as they work toward their goal of producing high-performance printed engineering polymers, with particular emphasis on using thiol-ene materials in biological scaffolds. Thiol-ene materials have shown promise for applications including adhesives, electronics, and biomaterials, researchers said.

“By implementing a nonlinear threshold response into a broad range of chemistries, we plan to print with resins such as silicones or other materials that impart functionality,” said LLNL materials engineer Caitlyn Cook.

Researchers added that, by studying how the resin behaves at different light dosages, they aim to improve the agreement between computational models and experiments, and to apply photochemical behavior to the computed tomography reconstructions that produce the 3D models used to build objects.

Source: LLNL

 



