
This comprehensive guide covers what serverless computing is and how it could reduce the complexity and cost of your cloud infrastructure.

The world’s major cloud providers are rushing to offer serverless computing services. Here’s everything you need to know about what it means to go serverless and how it could benefit your business.

SEE: How to build a successful career as a cloud engineer (free PDF) (TechRepublic)

Executive summary

  • What is serverless computing? Pay-per-use services that allow you to run snippets of back-end code for apps and websites without any overhead of managing servers.
  • Why does serverless computing matter? Serverless computing services reduce the overhead and cost of managing either physical or virtual infrastructure.
  • Who does serverless computing affect? Businesses running websites and apps with a need for back-end services or analytics.
  • When is serverless computing happening? Now: all of the major cloud computing platform providers offer a serverless computing service.
  • How can I get serverless computing? Via your cloud platform of choice. Serverless computing is available via AWS, Google Cloud Platform, Microsoft Azure, Cloudflare (Workers), and IBM Cloud (Cloud Functions).

SEE: All of TechRepublic’s cheat sheets and smart person’s guides

What is serverless computing?

Serverless computing is a category of cloud computing service that encapsulates two of the main selling points of the as-a-service model: computing that is nearly entirely hands-off, and where you really do pay only for what you use.

With serverless computing there is no virtual infrastructure for the user to manage, and the user is billed only for the time their code is running, down to the nearest 100 milliseconds (or, in the case of AWS Lambda, a single millisecond). Everything from scaling to the fault tolerance and redundancy of the underlying infrastructure is taken care of. There are servers involved, of course; the user simply doesn’t have to worry about any aspect of looking after them, as all of that is handled by the cloud service provider.

SEE: Cloud computing policy (TechRepublic Premium)

Serverless computing services, such as AWS Lambda, are built to run snippets of code that carry out a single short-lived task.

These small self-contained blocks of code, known as functions, have no dependencies on any other code and, as such, can be deployed and executed wherever and whenever they are needed. An example of a Lambda function could be code that applies a filter to every image uploaded from a website to Amazon’s S3 storage service.
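As an illustration, here is a minimal sketch in Python of what such an image-processing function might look like. The destination bucket name, the thumbnail “filter” and the Pillow dependency are assumptions for this example rather than details from the article; the (event, context) handler signature is the one AWS Lambda uses for Python functions.

import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
OUTPUT_BUCKET = "my-processed-images"  # hypothetical destination bucket

def handler(event, context):
    # Lambda passes in the triggering S3 event; each record describes one upload.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Fetch the uploaded image and apply a simple "filter" (a thumbnail here).
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        image = Image.open(io.BytesIO(original)).convert("RGB")
        image.thumbnail((512, 512))
        # Write the result to a separate bucket; the service then spins the function down.
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key, Body=buffer.getvalue())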

Unlike a cloud application where the backend code is structured in a more monolithic fashion and may handle several tasks, code running on serverless services like Lambda is more typical of that found in a microservices software architecture. Under this model, applications are broken down into their core functions, which are written to be run independently and communicate via API.

These small functions run by serverless services are triggered by what are called events. Taking Lambda as an example, an event could be a user uploading a file to S3 or a video being placed into an AWS Kinesis stream. The Lambda function runs every time one of these relevant events is fired, and once the function has run, the cloud service spins down the underlying infrastructure. This approach results in users being billed only for the time the code is running: down to the millisecond in the case of AWS Lambda, and to the nearest 100ms in some competing services.

The name Lambda is derived from the term Lambda function, which refers to small anonymous functions used in Lambda calculus.

SEE: Cloud computing: More must-read coverage (TechRepublic on Flipboard)

In some respects, while the technology that underpins serverless computing is relatively new, the idea isn’t. In fact, it’s been around since the dawn of the information age, as John Graham-Cumming, CTO of internet services company Cloudflare, told the QCon London 2019 conference.

Graham-Cumming referenced a paper written by D. J. Wheeler in 1952, ‘The use of sub-routines in programmes’, which talks about packaging computer instructions up into small reusable units, albeit with the expectation these instructions would be stored on punched tape. The paper even argues that ‘all the complexities should, if possible, be buried out of sight,’ very much in line with the existing vision for serverless computing.

“In a way, nothing has changed since 1952, it’s just that we’ve got all this compute power,” says Graham-Cumming, who referred to serverless computing as “functions-as-a-service” or FaaS.

“What’s interesting about functions-as-a-service is it’s about getting back to this sense of ‘I can just write a program and it’ll run anywhere.'”


Why does serverless computing matter?

This more granular, pay-per-use approach can be much cheaper for the right kind of workloads. A case in point is the web app that was built to allow football fans to upload themselves singing along with the official Euro 2016 anthem by David Guetta. The agency that built the web app, Parallax, used AWS Lambda for various functions, including generating custom album artwork based on information shared by the user.

James Hall, director of Parallax, said the service needed an infrastructure that could cope with going from no users to “millions” after Guetta read out the web address on TV.

Rather than paying to repeatedly spin virtual machines up on AWS EC2 in an attempt to meet demand, the team instead paid only for the time their code was running on Lambda.

Using “a more traditional architecture” with an AWS EC2 Auto Scaling pool would have cost between £500 and £1,500 per month, he said, compared to the “less than £300 a month” it ended up costing to use Lambda and other AWS services to power the site.

The first one million requests on Lambda per month are free, followed by $0.20 per million requests thereafter, so “for small applications, you could ostensibly run for absolutely nothing”, said Hall. That same free tier is available in Microsoft’s competing service, Azure Functions, with AWS’ and Microsoft’s free options also restricting usage to 400,000 GB-seconds or under, where a GB-second is the memory allocated to the function (in gigabytes) multiplied by the time the function runs (in seconds).
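As a rough illustration of that pay-per-use arithmetic, the Python sketch below estimates a monthly Lambda bill from the figures quoted above (one million free requests, $0.20 per million thereafter, and a 400,000 GB-second free compute tier). The per-GB-second compute rate is an assumed illustrative value rather than one quoted in this article.

def estimate_lambda_cost(requests, avg_duration_s, memory_gb,
                         price_per_million_requests=0.20,
                         price_per_gb_second=0.0000166667,  # assumed illustrative rate
                         free_requests=1_000_000,
                         free_gb_seconds=400_000):
    # Request charges: the first million invocations each month are free.
    billable_requests = max(requests - free_requests, 0)
    request_cost = billable_requests / 1_000_000 * price_per_million_requests
    # Compute charges: GB-seconds = memory (GB) x execution time (s), summed over all calls.
    gb_seconds = requests * avg_duration_s * memory_gb
    compute_cost = max(gb_seconds - free_gb_seconds, 0) * price_per_gb_second
    return request_cost + compute_cost

# Example: 3 million invocations a month, 200ms each, with 512MB of memory.
print(f"${estimate_lambda_cost(3_000_000, 0.2, 0.5):.2f}")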

Using Lambda can also be simpler than meshing together several other AWS services yourself, said Hall.

“The reason doing that in Lambda is so good is because you don’t have a load of servers sat around waiting to process these images, you don’t need to write a queuing system, you don’t need any of that plumbing and leg work, you can just have one function that returns an image,” he said.

SEE: AWS Lambda: A guide to the serverless computing framework (free PDF) (TechRepublic)

Serverless services or FaaS offerings such as Lambda are not, however, suited to running existing applications without a rewrite, because the code they run is structured very differently from that found inside most existing apps and cannot rely on the application’s state being saved between invocations. Serverless services run small, modular functions that are event-driven, as explained above, and stateless.

Long-running computational tasks are also not suited to Lambda at present, with each function running for a maximum of 15 minutes, although Google’s serverless offering doesn’t have this restriction. A Lambda function can also only be executed 1000 times concurrently, although this limit can be raised on request.

Not every application is suited to being run on a serverless computing platform; the vision is that serverless code will form part of an application, serving as part of a larger whole. That said, there are plenty of use cases today for serverless computing. AWS Lambda can easily be integrated with many different AWS services and NoSQL databases via API, and there are no charges for transferring data between Lambda and a range of other AWS services within the same AWS region. Amazon has also recently added an automated tool for SQL database users to migrate to Amazon Aurora Serverless, an on-demand version of its MySQL- and PostgreSQL-compatible Aurora database.

For Lambda, other possible uses include data processing, with Lambda carrying out real-time ETL on data within a Kinesis stream and loading it into a database once it has been transformed. A more concrete example is a Lambda@Edge function that selects the type of content a web application should return to a user, based on their location and the type of device they’re using. Lambda functions can also be used to glue AWS services together, being triggered by Auto Scaling events or CloudWatch alarms and in turn calling other AWS services.
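To illustrate the device-based content selection mentioned above, here is a minimal Lambda@Edge viewer-request sketch in Python. It assumes CloudFront has been configured to forward the CloudFront-Is-Mobile-Viewer header, and the /mobile path prefix is hypothetical.

def handler(event, context):
    # CloudFront hands the viewer request to the function before routing it.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # Header names are lower-cased in the Lambda@Edge event structure.
    viewer = headers.get("cloudfront-is-mobile-viewer", [{}])[0]
    if viewer.get("value") == "true" and not request["uri"].startswith("/mobile"):
        # Serve the mobile variant of the content from a hypothetical /mobile path.
        request["uri"] = "/mobile" + request["uri"]
    # Returning the (possibly rewritten) request lets CloudFront carry on processing it.
    return request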

“We think there’s a revolution going on called network-based serverless or network-based functions-as-a-service,” says Cloudflare’s Graham-Cumming.

“The idea is you start to forget about where the code runs, you have it all over the world, near where end users are and it just gets distributed and executed at scale.”

Like every computing paradigm, serverless has its drawbacks, particularly in its current form. One recent paper by UC Berkeley researchers highlighted the issues with the limited lifetimes of serverless instances, the network bottlenecks from constantly shunting data back and forth, and the fact that specialized chips such as GPUs aren’t available via serverless offerings such as AWS Lambda. Another common criticism is the latency when running a serverless function for the first time, stemming from the time it takes to spin up the underlying IT infrastructure.

However, the researchers are ultimately optimistic that many of the limitations in offerings today will be overcome by future services, and the major cloud platform providers are trying to address shortcomings.

For example, not only do many serverless offerings today require applications to be rewritten, they generally only support a limited range of languages and increase the risk of locking users into a single computing platform.

To try to address these limitations, Google launched Cloud Run, which aims to provide the managed infrastructure benefits of serverless while allowing users to deploy a wider range of existing applications, to write code in the language of their choice, and be able to more easily move that code to different platforms.

Cloud Run allows developers to deploy code inside a stateless HTTP container, with the service taking care of provisioning and managing the underlying infrastructure, including scaling with demand; like other serverless offerings, it is pay-per-use, billed to the nearest 100ms. AWS provides a similar service in AWS Fargate, which allows users to run containers without having to manage servers or clusters, although its pay-per-use options are slightly less granular.
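To give a sense of what a Cloud Run-style workload looks like, here is a minimal sketch of a stateless HTTP service in Python that listens on the port supplied via the PORT environment variable (Cloud Run defaults to 8080). Packaging it into a container image is assumed but not shown.

import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A stateless response: nothing is kept between requests or instances.
        body = b"Hello from a stateless container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The platform injects the port to listen on via the PORT environment variable.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()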


Who does serverless computing affect?

Serverless computing services are available on all of the major cloud platforms, both to individuals and businesses.

Firms from many sectors are using AWS Lambda, including Coca-Cola, Major League Baseball, AdRoll, Localytics (for app usage analytics), FireEye (which built an intrusion detection system that processes events using Lambda), and US digital media company Bustle. Amazon’s own smart virtual assistant, Alexa, is also built in part on Lambda.

AWS technical evangelist Ian Massingham says Lambda is well suited to web applications, analytics and IoT and, as such, should have cross-sector appeal.


When is serverless computing happening?

You can get started with serverless computing right now. Since Amazon revealed Lambda in November 2014, competing services have been launched on each of the major cloud platforms.

Microsoft’s Azure Functions, Google Cloud Functions, and Cloudflare Workers all offer serverless products similar to AWS Lambda.


What serverless computing options are available?

Various serverless computing services are available via the major cloud platforms but the most mature is AWS Lambda. There are plenty of guides to getting started with setting up a Lambda instance.

Lambda functions can be written in Java, Go, PowerShell, Node.js (JavaScript), C#, Python, and Ruby. There’s also a Runtime API that allows developers to author their functions in other programming languages, and there are software frameworks that help you build serverless applications, such as the appropriately named Serverless Framework.

Other options for serverless services include Google Cloud Functions, which supports only Node.js (JavaScript), Python, and Go but allows for unlimited execution time for functions; Microsoft’s Azure Functions, which supports a wider range of languages, including Bash, Batch, C#, F#, Java, JavaScript (Node.js), PHP, PowerShell, Python, and TypeScript, and has similar pricing to Lambda; IBM Cloud Functions, whose language support includes JavaScript (Node.js) and Swift; and Cloudflare Workers, which runs functions written in JavaScript or any language that can be compiled to WebAssembly.



Linux 101: Renaming files and folders

In your quest to migrate to the Linux operating system, you’ve found the command line interface a must-know skill. Fortunately, Jack Wallen is here to help you with the basics.

I’m going to help you learn a bit more about Linux. If you’re new to the operating system, there are quite a few fundamental tasks you’re going to need to know how to do. One such task is renaming files and folders. 

You might think there’s a handy rename command built into the system. There is, but it’s not what you assume. Instead of renaming a file or folder, you move it from one name to another, with the mv command. This task couldn’t be any easier. 

SEE: Linux: The 7 best distributions for new users (free PDF) (TechRepublic)

For instance, say you have a file named script.sh and you want to rename it backup.sh. For that, you’d issue the command:

mv script.sh backup.sh

The first file name is the original and the second is the new name: simple. For folders, it’s the same thing. Say you have a folder named “projects” and you want to rename it “python_projects”. For that, you’d issue the command:

mv projects python_projects

One nice thing about the mv command (besides its simplicity) is that it does retain the original directory attributes, so you don’t have to worry about reassigning things like permissions and ownership. Even if you issue the command with sudo privileges, it won’t shift the directory ownership to root. 

Another handy feature is that you don’t have to leave the file in the same directory. Say you have script.sh in your home directory and you want to rename it to backup.sh and move it to /usr/local/bin/ at the same time. Once again, that’s as simple as:

 sudo mv script.sh /usr/local/bin/backup.sh

The reason you have to use sudo is that the /usr/local/bin directory is owned by root, so your standard user won’t have permission to move the file into it.

And that’s all there is to renaming files and folders from the Linux command line. Enjoy that new skill.



Improving Makeup Face Verification by Exploring Part-Based Representations


Facial recognition is being used more and more widely; however, there are still open issues in this field. One of them is facial makeup, because it can change facial appearance and compromise a biometric system. A recent study proposes a technique to improve facial recognition in the presence of makeup.


The study explores part-based representations. Different parts of a face are affected by cosmetics to different degrees, so this approach can increase the accuracy of face recognition. Two strategies for cropping the face are analyzed.

The first splits the face into four components: left periocular (including the eye and eyebrow), right periocular, nose, and mouth. The second divides the face into three facial thirds. After cropping, features are extracted using convolutional neural networks (CNNs) and fused with the holistic score. The results show that this approach yields improvements even without fine-tuning or retraining the CNN models.

From the paper’s abstract: “Recently, we have seen an increase in the global facial recognition market size. Despite significant advances in face recognition technology with the adoption of convolutional neural networks, there are still open challenges, as when there is makeup in the face. To address this challenge, we propose and evaluate the adoption of facial parts to fuse with current holistic representations. We propose two strategies of facial parts: one with four regions (left periocular, right periocular, nose and mouth) and another with three facial thirds (upper, middle and lower). Experimental results obtained in four public makeup face datasets and in a challenging cross-dataset protocol show that the fusion of deep features extracted of facial parts with holistic representation increases the accuracy of face verification systems and decreases the error rates, even without any retraining of the CNN models. Our proposed pipeline achieved state-of-the-art performance for the YMU dataset and competitive results for other three datasets (EMFD, FAM and M501).”
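As a rough sketch of the fusion idea described above (not the authors’ code), the Python snippet below combines verification scores from facial parts with a holistic score. The feature extractor and the part crops are placeholders; simple average fusion stands in for whatever fusion rule the paper evaluates.

import numpy as np

def cosine_score(feat_a, feat_b):
    # Similarity between two feature vectors (higher = more likely the same person).
    return float(np.dot(feat_a, feat_b) /
                 (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))

def fused_score(extract, face_a, face_b, parts_a, parts_b):
    # extract: any CNN feature extractor mapping an image to a vector (placeholder).
    # parts_a / parts_b: dicts of crops, e.g. left/right periocular, nose, mouth.
    holistic = cosine_score(extract(face_a), extract(face_b))
    part_scores = [cosine_score(extract(parts_a[name]), extract(parts_b[name]))
                   for name in parts_a]
    # Simple average fusion of the holistic score with the per-part scores.
    return float(np.mean([holistic] + part_scores))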

Research paper: de Assis Angeloni, M. and Pedrini, H., “Improving Makeup Face Verification by Exploring Part-Based Representations”, arXiv:2101.07338. Link: https://arxiv.org/abs/2101.07338





Nokia 1.4, Nokia 6.3, and Nokia 7.3 May Launch in Late Q1 or Early Q2 This Year


Nokia 1.4, Nokia 6.3, and Nokia 7.3 could launch in Q1 or early Q2 of this year, a new report claims. All three of these Nokia phones have been in the news in the past, with speculation around their release. While Nokia 1.4 is relatively new, Nokia 6.3 and Nokia 7.3 were originally expected to launch in the third quarter of 2020. It is also possible that, whenever these two Nokia phones launch, they could be named Nokia 6.4 and Nokia 7.4.

Starting with Nokia 1.4, the phone has made its way through multiple listings, as per a report by Nokiapoweruser. The report states that the phone may launch in February. Recently, specifications and pricing for Nokia 1.4 were tipped, suggesting a 6.51-inch HD+ LCD display, a quad-core processor, 1GB of RAM with 16GB of storage, and a dual rear camera setup. The phone is expected to be priced under EUR 100 (roughly Rs. 8,800).

Nokia 6.3 and Nokia 7.3, on the other hand, have been in the news for quite a while now. The report by Nokiapoweruser states that they may launch late in the first quarter or early in the second quarter of 2021. These phones could also launch as Nokia 6.4 and Nokia 7.4.

Nokia 6.3 has been tipped in the past to come with the Qualcomm Snapdragon 730 SoC and a 24-megapixel shooter. Nokia 7.3 may feature a 6.5-inch full HD+ display with a hole-punch cutout and be powered by the Qualcomm Snapdragon 690 SoC. It could come with a 48-megapixel primary sensor and a 24-megapixel selfie shooter. Nokia 6.3 and Nokia 7.3 are expected to be backed by a 4,500mAh and a 5,000mAh battery, respectively.

Originally, the two phones were expected to launch at IFA 2020 in September. Then, it was reported that they may launch in November. Even now, Nokia or brand licensee HMD Global has not shared any information on the launch date for these phones.


