The Hidden Power of What You Say and Don’t Say

“I will tolerate no dissension up there. My word will be final and binding, without exception. If you don’t like a decision I make, I’m happy to discuss it afterward, but not while we are up on the hill.”
These were the words of Rob Hall, Adventure Consultants’ lead guide. Caught in a snowstorm while trying to descend from Everest’s summit, he tragically lost his life. He wasn’t the only one to die.
His team lost three members, along with Scott Fischer, Mountain Madness’ lead guide, in an incident captured in mountaineering literature and on film and dubbed the ‘1996 Everest disaster’.
One guide observed that to attribute a specific cause to the catastrophe is ‘to promote an omniscience that only gods, drunks, politicians, and dramatic writers can claim’.
There were many contributing factors. One that was not explored, but could be of great value to future expeditions, as well as to complex organisations in a wider context, is the use of language.
David Marquet, a former US Navy submarine captain, is the author of the international bestseller Turn the Ship Around! He recently published a second book, Leadership is Language: The Hidden Power of What You Say and What You Don’t. It examines the language used by the crew of El Faro, a cargo ship that sank near the Bahamas during Hurricane Joaquin on October 1, 2015. All 33 people aboard were killed.
There are striking parallels with the 1996 Everest disaster. Marquet’s observations suggest that paying more attention to our words, spoken and unspoken, might help to prevent disaster in the mountains. The same lessons apply to running organisations and managing projects around the world, especially when it comes to speaking truth to power or raising issues that leaders might be reluctant to address.
Rob Hall was a far more experienced climber than his clients and was the expedition’s leader.
He had guided 39 clients to the summit, and would have seen first-hand, many times, the symptoms of summit fever: the condition in which climbers become so fixated on the top that rationality is abandoned. Emotion takes over, and climbers move inexorably toward the summit, often without regard for the cost to themselves and others.
It is actually a particular example of a broader cognitive bias known as the ‘sunk cost effect’: the tendency not to abandon a course of action in which we have already made substantial investments. I’ve made it this far, I re-mortgaged my house, and I’m not giving in now.
To guard against such self-inflicted harm, Rob Hall enforced the widely accepted ‘one o’clock rule’: climbers must reach the summit by 1.00pm, or 2.00pm at the very latest. If not, they must turn around, ensuring a safe return to high camp before their oxygen bottles run dry.
Hall’s clear declaration of authority (‘I will tolerate no dissension up there …’) would have warned off anyone inclined to argue with a decision to turn around short of the summit. Unfortunately, it also served to keep his clients passive.
The message was that Hall would do the thinking, not them, and it discouraged them from raising concerns when doing so was, as it turned out, perfectly appropriate.
Unfortunately, Hall’s assertion of authority left no room for the unpleasant truth that he, too, could be afflicted by summit fever, even as he guarded against it in his clients.
Hall helped Doug Hansen, his client, up the last few feet to the summit at around 4.00pm.

The Exposure Triangle: Aperture

To capture high-quality footage that is perfectly exposed for your project, it is essential to understand the exposure triangle. What is the exposure triangle? It is the relationship between the three settings that determine proper exposure, for both stills and video: aperture, shutter speed, and ISO. I will cover shutter speed and ISO in future articles. Here, we’ll be focusing on aperture.
What exactly is aperture?
Aperture can be viewed as the pupil of the lens. When it is more open (lower f-stop number), it lets MORE light reach the sensor. When it is more closed (higher f-stop number), it lets LESS light reach the sensor.
A camera aperture does a few more things. Besides controlling the amount of light hitting the sensor, it also controls depth of field: how blurred the background is in your photo or video. This is important because when you open the aperture to let more light in, you also SHRINK your depth of field. That can make it a little harder to keep your subject in focus, especially if they are moving.
Here’s an example using a Buck Showalter bobblehead. (Go O’s!)
The first image (top right) shows the subject in sharp focus with the background blurred; that blur is sometimes called bokeh. In the final image, the subject remains in sharp focus, but the background is almost in focus as well. The exposure of the images has not changed, because I used a much LONGER shutter speed as I closed the aperture down to f/22.
The aperture controls how much light is allowed onto the sensor. If I change that amount but want to maintain the same exposure, I need to adjust something else in the triangle. In this instance, because I was closing down the aperture (allowing less light onto the sensor), I had to keep the shutter OPEN for longer to maintain the same exposure. I could instead have kept the shutter speed unchanged and raised the ISO sensitivity, but that comes with its own set of considerations. This is where the triangle comes into play. We’ll cover its other corners in future posts.
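The trade-off above can be sketched numerically with the standard exposure-value relation, EV = log2(N²/t) at a fixed ISO; the function names below are my own, not from any camera API:

```python
import math

def exposure_value(f_number, shutter_s):
    # EV = log2(N^2 / t); two settings with the same EV give the same exposure (fixed ISO)
    return math.log2(f_number ** 2 / shutter_s)

def matching_shutter(old_f, old_t, new_f):
    # Keep N^2 / t constant: t_new = t_old * (new_f / old_f) ** 2
    return old_t * (new_f / old_f) ** 2

# Closing down from f/8 to f/22 cuts the light by roughly three stops,
# so the shutter must stay open about (22/8)^2 = 7.6x longer to compensate.
t = matching_shutter(8, 1 / 125, 22)
print(f"shutter: 1/125 s -> {t:.4f} s ({t * 125:.2f}x longer)")
print(round(exposure_value(8, 1 / 125), 3), round(exposure_value(22, t), 3))  # both ≈ 12.966
```

The same arithmetic explains the ISO alternative: doubling ISO buys one stop, so it would take three doublings to match the move to f/22.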
Hopefully, you now realize that aperture is more than an exposure setting. Yes, you can open the aperture to brighten your image or close it to darken it, but there’s more to it than that. I rarely think of aperture as a way to alter exposure; it is a powerful creative tool for making your images more visually appealing. But aperture can only be used creatively once you understand how it affects your exposure.
Here’s another example.
A couple of years ago, I had the chance to shoot a music clip for Nelly’s Echo, a good friend and local musician. You may also recognize him from The Voice.
It was a beautiful day in Baltimore, and it was difficult to balance all the bright sunlight bouncing off the buildings while keeping the background creamy and blurred in most shots. Nelson, the band’s lead singer and namesake, sang as he walked down the street. We alternated between a wider shot, showing Nelson from the waist up and giving a sense of place through the movement of the city, and a closer shot to capture the emotion in his face.
The wider shot was meant to show the city without distracting from Nelson: the background should be blurred, but not completely out of focus. For the tighter shot, I wanted the focus to be solely on Nelson. This is where I was able to use my knowledge of aperture to my advantage.
This is also

Factory Machinery Makers Win With the Electric Vehicle Boom

Factory equipment manufacturers, who supply the highly automated picks, shovels, and other tools needed to prospect in the EV gold rush, are enjoying a boom in investment from both established automakers and newcomers to the electric-vehicle market.
The recovery in U.S. manufacturing has led to good times for robot makers and other equipment manufacturers. According to the U.S. Census Bureau, new orders rose to nearly $506 million in June, after falling to $361.8 million in April 2020 during the COVID downturn.
Here’s a graphic on U.S. manufacturing new orders: https://graphics.reuters.com/AUTOS-PLANTS/EQUIPMENT/zjvqkkqjbvx/index.html
New factories for electric vehicles, funded in part by investors buying shares in newly public companies like Lucid Group Inc, are boosting demand. “I don’t think it’s reached its peak yet,” Andrew Lloyd, leader of the electromobility segment at Stellantis-owned Comau, said in an interview, adding that there is still much to be done. “Over the next 18 to 24 months, there will be a significant demand for our products.”
The success of Tesla Inc has accelerated growth in the EV sector. This is on top of the usual work equipment makers do to support the production of gasoline-powered cars.
According to LMC Automotive, automakers will invest more than $37 billion in North American plants between 2019 and 2025, and 15 of the 17 new North American plants will be built in the United States. More than 77% of this spending will go to EV or SUV projects.
Equipment providers, however, are in no rush to expand their capacity.
“There’s a natural point at which we will say no to new business,” Comau’s Lloyd said. According to industry officials, automakers can spend anywhere from $200 million to $300 million on a single area of a factory, such as a body shop or a paint shop.
“WILD, WILD WEST”
“There is a mad rush to get these new EV variants to market,” John Kacsur, Rockwell Automation’s vice president for the automotive and tire segments, told Reuters. Industry consultant Laurie Harbour agrees. According to Kacsur, automakers have signed agreements with suppliers to build equipment for 37 EV models in North America between this year and 2023. This excludes all work being done on gasoline-powered vehicles.
Mathias Christen, a spokesperson for Durr AG, which specializes in paint shop equipment, said there is still a pipeline of projects from new EV manufacturers; the company saw its EV business grow by about 65% last fiscal year. “This is why the peak has not yet been seen.”
Orders at Kuka AG, the German manufacturing automation company owned by China’s Midea, rose 52% to just under 1.9 billion euros ($2.23 billion), thanks to strong demand from Asia and North America.
“We ran out of capacity for any extra work around a year and a half ago,” said Mike LaRose, CEO of Kuka’s Americas auto group. “Everyone is so busy, there’s no room for everyone.”
Kuka is building electric vans for General Motors Co at its Michigan plant to meet demand until the No. 1 U.S. automaker retools its Ingersoll plant in Ontario next year to handle the regular work. Although automakers and battery companies must order robots and other equipment 18 months in advance, customers want their equipment sooner, said Neil Dueweke, vice president of automotive at Fanuc Corp. He calls this the “Amazon effect” within the industry.
Dueweke said Fanuc built a facility holding 5,000 robots, with shelves stacked 200 feet high, stretching almost as far as the eye can see. He also noted that Fanuc America set market share and sales records last year.
Some automakers have also experienced COVID-related delays and problems when trying to upgrade their vehicles.
R.J. Scaringe, CE

Everybody’s business has the best projects

You may be familiar with the story of Everybody (Somebody, Anybody, Nobody). To summarize: there was a crucial job to do, and although Everybody was asked to complete it, Nobody did. Somebody got angry, because Anybody could have done it.
It’s a funny linguistic joke, but it’s also a classic example of project management gone wrong. Instead of clear communication, managed expectations, and good morale, there is confusion and missed deadlines. Without a serious course correction, everything goes to hell in a handcart.
Transparency is key to project management success. Visibility, not just for one employee but of the larger expectations, of company goals, and of input from others, from colleagues to third-party providers, ensures smoother progress and better morale.
Transparency is more than everyone logging in to the same Gantt charts, although that’s a good start. It covers a wide range of issues, including employee empowerment, leadership structure, and how projects are constructed.
What does transparency look like in a best-in-class project management environment? Above all, it means clarity of expectations. That may sound simple, but when things are moving quickly it is easy to add a task here or revise a project specification there. Before you know it, teams feel overwhelmed, to-do lists spiral out of control, deadlines become impossible to meet, and deliverables seem out of reach.
As in a marriage, communication is essential. You must be able to communicate clearly what is expected and required. Too many projects run on timeframes that suit the business rather than being tailored to the capabilities of the teams delivering them.
Listening to your team is the first step. Are they juggling a young family? Have they just returned from time away? Understanding the team’s circumstances will help you create realistic, achievable timelines.
There will always be delays, roadblocks, and challenges, yet many people feel unable to speak out when they hit them. This could be due to a culture of blame (raise a problem and you become the problem), or simply because hierarchy gives junior members the impression that their voices are less welcome.
Your most valuable early warning system is often the staff who are involved in the daily execution of projects. It is important to create a culture that values everyone’s opinion and encourages collaboration.
Not all concerns can be resolved. People can feel uncomfortable with change, especially during a period of intense transformation. Each member must be heard and their concerns addressed to ensure that new directions are supported.
This means understanding the motivations of each person, giving them the confidence and the opportunity to voice their concerns, and then working together to find common ground to help them adapt to the new direction. Transparency is key to this process. Understanding the why and how of an organisation’s direction change can help the less evangelistic members of the team get on board.
Projects do not exist in a bubble. There are many moving parts, and external forces beyond the team’s control often influence them. Transparency is the project manager’s first and last line of defense: understanding the motivations of all stakeholders, internal and external, lets teams spot potential problems early and plan and adapt accordingly.
The pandemic drove an increase in the use of collaboration tools, giving teams more structure and visibility into projects that were previously run ad hoc. Requests can now be registered online, giving teams more control and visibility. Where the latt?

The best project managers never ignore organisational politics

Every project is influenced by some form of organisational politics, because all organisations are political in some way.
The politics and personal agendas of senior executives are often among the top three factors that affect project success.
Many PMs are upset by the political shenanigans that affect their projects, and these feelings are understandable: project management is hard enough without the extra work of dealing with politics and skulduggery from senior managers.
Experienced PMs are familiar with the challenge of winning friends and supporters in an organisation’s circles of power and at its top table. That is why smart PMs foster trust and affinity among their stakeholder communities, especially at the top levels.
The best PMs understand that building and maintaining a strong network of supporters and cheerleaders among bigwig stakeholders is crucial. It is often what lets them tackle the organisational hurdles of advancing their projects, where merely technically sound, average PMs struggle.
The best PMs are not political animals. They recognize that the political landscape is an integral part of the organisational terrain. They have learned from experience that it is dangerous to ignore workplace politics. They are sensitive to organisational dynamics and political intrigues and incorporate these factors into their stakeholder management approach in ethical and healthy ways.
All PMs can benefit from our traditional knowledge and tools for managing stakeholders, but these are often insufficient for the politics and power plays that ultimately determine the success of our projects and the nature and quality of our organisational life. When we understand those dynamics, life can be sweet for us as PMs and for our project teams: we have a growing fan base at the top table, we are happy at work, we have credibility, and our projects move more smoothly. When we don’t, life can be very frustrating: project agendas stall, stakeholder alignment seems impossible, and getting investment funding or project gateway approvals can feel like climbing Everest.
Organisational savvy requires us to look beyond traditional stakeholder management approaches. We have to understand the dynamics of power, politics and human idiosyncrasies that are at play in every organisation.
As Sweet Stakeholder Love explains, stakeholders are not static beings; they are human beings. Humans are not like computers or light switches that can be turned on and off. We are not always rational, and we do not always behave in reasonable ways. We are complex beings whose behaviour can be idiosyncratic, shaped by a combination of environmental, psychological, and personal influences.
Personal factors such as age, gender, marital status, and religious beliefs can influence our behaviour. Psychological influences include personality type, values, and attitudes. Environmental factors include things like political orientation and financial or economic circumstances. With so many influences at play, it is not surprising that handling some people can feel like dealing with jelly.
We are also driven by many visible and invisible forces: our personal motivations, our struggles and problems, and the emotions running through us at any given moment. A stakeholder going through a bitter divorce might not be the most pleasant person to work with; a stakeholder coping with severe health issues or other traumatising circumstances might not be at their best either.
Even more frustrating is the fact that you have to deal with it.

Part 3: Laravel Framework Setup on AWS Server and Other Essential Components

TABLE OF CONTENT
1. Overview
2. URL Generation
3. Session
4. Validation
5. Error Handling
6. Conclusion
7. CloudThat
8. FAQs

1. Overview
Laravel is an open-source PHP web framework with expressive, elegant syntax. Its built-in features, together with a variety of compatible packages and extensions, make it one of the best options for building modern, full-stack web apps. In my previous blog, Understanding the Basic Components of Laravel Framework – Part 2, I gave you an overview of the framework’s basic components.
This will be the final segment. We will cover topics such as URL generation, accessing the current URL, error-handling techniques, and more.
2. URL generation
Laravel offers helpers that can be used to generate URLs for your application. A generated URL automatically uses the scheme (HTTP or HTTPS) and host of the current request.

Accessing the Current URL - If no path is provided to the url helper, an Illuminate\Routing\UrlGenerator instance is returned, allowing you to access information about the current URL.
Signed URLs - Laravel allows us to easily create “signed URLs” for named routes. These URLs have a signature hash appended to the query string, which allows Laravel to verify that the URL has not been modified since it was created. Signed URLs are useful for routes that are publicly accessible but need a layer of protection against URL manipulation.
URLs for Controller Actions - The action function generates a URL for the given controller action. If the controller method accepts route parameters, you can pass an associative array of route parameters as the second argument.
Default Values - In certain cases, we can provide request-wide default values for specific URL parameters.
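Conceptually, a signed URL just appends an HMAC of the URL to its query string: anyone can fetch the URL, but nobody can alter it undetected. Here is a minimal, framework-agnostic sketch of that idea in Python (the helper names and secret are illustrative, not Laravel’s API):

```python
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"app-secret-key"  # stands in for the application key

def sign_url(base_url: str, params: dict) -> str:
    # Sign the canonical URL (sorted params) and append the digest
    query = urlencode(sorted(params.items()))
    signature = hmac.new(SECRET, f"{base_url}?{query}".encode(), hashlib.sha256).hexdigest()
    return f"{base_url}?{query}&signature={signature}"

def has_valid_signature(url: str) -> bool:
    # Recompute the HMAC over everything before the signature parameter
    base, _, signature = url.rpartition("&signature=")
    expected = hmac.new(SECRET, base.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)

url = sign_url("https://example.com/unsubscribe", {"user": 1})
print(has_valid_signature(url))        # True
print(has_valid_signature(url + "x"))  # tampered -> False
```

In Laravel itself, the framework performs both steps for you for named routes; the sketch only shows why a tampered query string fails verification.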
3. Session
HTTP-driven applications are stateless, so sessions provide a way to store information about the user across multiple requests. That information is kept in a persistent store/backend that subsequent requests can access. Laravel offers a variety of session backends, accessed through an expressive, unified API.
The application’s session configuration file is located at config/session.php. By default, Laravel is configured to use the ‘file’ session driver, which works well for many applications. If the application is load-balanced across multiple web servers, we should choose a central store that all servers can access.
The session driver configuration option determines where session data will be stored for each request. Laravel comes with several great drivers:
File – sessions are stored in storage/framework/sessions
Cookie – sessions are stored in secure, encrypted cookies
Database – sessions are stored within a relational database
Redis / Memcached – sessions are stored in these fast, cache-based stores
DynamoDB – sessions are stored in AWS DynamoDB
Array – sessions are stored in a PHP array and will not be persisted
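To make the ‘file’ driver concrete, here is a toy sketch of what a file-backed session store does, written in Python rather than PHP (this is an illustration of the concept, not Laravel’s implementation): each session ID maps to a file whose serialized contents a later request reads back.

```python
import json
from pathlib import Path

class FileSessionStore:
    """Toy analogue of a file session driver: one JSON file per session ID."""

    def __init__(self, directory: str):
        self.dir = Path(directory)
        self.dir.mkdir(parents=True, exist_ok=True)

    def put(self, session_id: str, data: dict) -> None:
        # Persist the session payload so a later request can retrieve it
        (self.dir / session_id).write_text(json.dumps(data))

    def get(self, session_id: str) -> dict:
        path = self.dir / session_id
        return json.loads(path.read_text()) if path.exists() else {}

store = FileSessionStore("/tmp/sessions-demo")
store.put("abc123", {"user_id": 7, "cart": ["sku-1"]})
print(store.get("abc123")["user_id"])  # 7 (a "later request" reads it back)
```

A central store (Redis, a database, DynamoDB) replaces the local directory, which is why it works across load-balanced servers while the file driver does not.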
4. Validation
Laravel offers several ways to validate incoming data, with a wide variety of validation rules that can be applied, including the ability to verify that a value is unique in a given database table.
Defining the Routes - The application’s routes are defined in the routes/web.php file. The GET route displays a form for creating a new post, and the POST route stores the new blog post in the database.
Writing the Validation Logic - This is the logic that will validate the new blog post, using the ‘validate’ method provided by Illuminate\Http\Request.
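Laravel’s validate method takes rule strings such as 'required|max:255'. The mechanism can be mimicked in a few lines; this Python sketch is framework-agnostic, and its deliberately tiny rule set (just 'required' and 'max:N') is mine, not Laravel’s validator:

```python
def validate(data: dict, rules: dict) -> dict:
    """Tiny analogue of rule-string validation: returns a field -> errors map."""
    errors = {}
    for field, rule_string in rules.items():
        value = data.get(field)
        for rule in rule_string.split("|"):
            if rule == "required" and not value:
                errors.setdefault(field, []).append("required")
            elif rule.startswith("max:") and value and len(value) > int(rule[4:]):
                errors.setdefault(field, []).append(rule)
    return errors

post = {"title": "", "body": "Hello"}
print(validate(post, {"title": "required|max:255", "body": "required"}))
# {'title': ['required']}
```

In Laravel, a failed validation automatically redirects back with the error map; here the caller simply checks whether the returned dictionary is empty.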

Kubernetes Cluster Using Microk8s On Ubuntu

TABLE OF CONTENT
1. Introduction
2. VirtualBox Setup
3. Ubuntu 20.04 Setup on VirtualBox
4. Microk8s Setup on Ubuntu 20.04
5. Clustering Microk8s Instances
6. Microk8s Dashboard Setup
7. Microk8s EFK Stack Configuration
8. Conclusion

1. Introduction
Microk8s automates the deployment, scaling, and management of containerized applications. It is a CNCF-certified upstream Kubernetes deployment that runs entirely on our workstation.
Learn more about 8 Key Attributes Modern Cloud-Native Architecture
It packs the entire Kubernetes library and runs all Kubernetes services natively. Unlike tools such as Minikube, which spin up a virtual machine for the local Kubernetes cluster, MicroK8s does not require a VM. This feature has a flip side: Microk8s needs Linux (a distribution that supports snap), whereas tools such as Minikube also support other operating systems. To deploy microk8s on a non-Linux operating system, we first need to install Linux on top of it.
This is a Beginner’s Guide to Kubernetes with Real-Time Examples.
2. VirtualBox: Setting up
We use VirtualBox to install Ubuntu 20.04 on top of Windows 10. Alternatively, you can use an existing Ubuntu machine or spin up an Ubuntu server in the cloud.
Open the VirtualBox downloads page in a browser and click ‘Windows hosts’ to download the installer file. Launch the installer from your browser’s downloads folder.
Follow the on-screen instructions to complete the installation, granting the appropriate access permissions. Click Finish to open Oracle VM VirtualBox Manager.
3. VirtualBox: Ubuntu 20.04 Set-up
Visit the Ubuntu downloads page in a browser (https://ubuntu.com/download/desktop), click ‘Download,’ and save the file.
Open Oracle VM VirtualBox Manager and click ‘New.’
Follow these steps:
* Name your VM.
* Select the Machine Folder where the VM’s files will be saved.
* Select Type: ‘Linux.’
* Click Next.
Choose the amount of RAM for your virtual machine. Ubuntu recommends 4 GB; make sure enough RAM is left for the host system’s processes. Click Next.
Select ‘Create a virtual hard disk now.’ Click Create.
Keep the default virtual hard disk file type; other options are also available. Click Next.
Select ‘Dynamically allocated’ as the storage option for your virtual hard disk. Click Next.
Choose the file location and file size. Ubuntu recommends at least 25 GB of free hard drive space. Click ‘Create.’
Select the configured VM (Ubuntu 20.04 in our case) in the main window of Oracle VM VirtualBox Manager and click ‘Settings.’
Click ‘Storage’ and select the ‘Optical Drive’ icon under Controller: IDE.
Click ‘Add’ and select the downloaded Ubuntu 20.04 ISO file. Click Open, then Choose, then OK. Optionally, adjust the display settings to increase video memory. If you experience significantly slower internet speeds inside the VM, you can also go to Network settings and change the network adapter’s ‘Attached to’ setting to ‘Bridged Adapter.’
Select the virtual machine and click ‘Show.’ Wait for the disk check to complete before proceeding with the installation.
Click ‘Install Ubuntu.’
Select the keyboard layout and click Continue. Choose ‘Normal installation’ or ‘Minimal installation’; you can also tick the boxes to download updates and install third-party software. Click Continue.
Unless you wish to create multiple partitions, select ‘Erase disk and install Ubuntu.’ Click ‘Install Now.’

Introduction to Serverless Computing

An organization’s conventional approach of building its IT environment/infrastructure by acquiring hardware and software resources individually has become outdated. There are now many ways to virtualize IT systems and access the required applications over the Internet through web-based services.
Cloud computing is very popular today, and with so many cloud service providers on the market, there are many questions about which one to choose. Before you dive into serverless computing, let’s refresh the memory with some cloud concepts.
Introduction to Serverless Computing
Serverless computing is a cloud code-execution model in which the cloud provider manages the operation of virtual machines as needed to fulfil requests, and bills by an abstract measure of the resources required to satisfy each request rather than per virtual machine, per hour. Despite the name, it does not mean code executes without servers. The term “serverless computing” comes from the fact that the system’s owner does not need to rent, buy, or provision the virtual machines or servers on which the back-end software runs.
Why serverless computing?
Serverless computing can be more cost-effective than renting or purchasing a fixed number of servers, which often involves long periods of underuse or idle time.
A serverless architecture also means that developers and operators don’t have to spend time setting up or tuning autoscaling systems or policies. The cloud provider will scale the capacity to meet the demand.
These systems are described as elastic rather than merely scalable, because a cloud-native architecture can scale down as well as up.
With function-as-a-service, the units of code exposed to the outside world are simple event-driven functions. This frees developers from thinking about multithreading or explicitly handling HTTP requests in their code, simplifying back-end software development.
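A minimal sketch of such an event-driven function, using AWS Lambda’s Python handler convention (the event shape follows API Gateway’s proxy format; the greeting logic is purely illustrative):

```python
import json

def handler(event, context):
    """Event-driven function: the platform parses the HTTP request into
    `event` and turns the return value into a response; no server code here."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, we can invoke it exactly as the platform would:
print(handler({"queryStringParameters": {"name": "dev"}}, None))
```

Note there is no HTTP server, thread pool, or routing code in sight; the provider supplies all of that around the function.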
Top Serverless Computing Tools
1. AWS Lambda
AWS Lambda, introduced in 2014, was the first serverless computing tool, popularly known as Function-as-a-Service (FaaS).
AWS Lambda lets you run code on a serverless computing platform without provisioning or managing servers, creating workload-aware cluster scaling logic, or managing event integrations.
Benefits:
No servers to manage: AWS Lambda runs your code without you having to maintain servers. Simply write the code and upload it to Lambda as a ZIP file or container image.
Continuous scaling: AWS Lambda automatically scales your application by running your code in response to each event. Your code runs in parallel, processing each trigger individually and scaling precisely with the size of the workload, from a few requests per hour to hundreds of thousands per second.
Millisecond metering reduces costs: you pay only for the compute time you use, so you never over-pay for infrastructure. You are charged for every millisecond your code runs and for the number of times it is triggered.
Consistent performance at any scale: AWS Lambda lets you optimize your code’s execution time by choosing the right memory size for your function.
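Millisecond metering can be made concrete with a back-of-envelope calculator: the bill is GB-seconds of compute plus a per-request fee. The rates below are illustrative defaults, not current AWS pricing:

```python
def lambda_cost(invocations: int, avg_ms: float, memory_mb: int,
                gb_second_rate: float = 0.0000166667,
                per_request_rate: float = 0.20 / 1_000_000) -> float:
    """Estimate a Lambda-style bill: compute (GB-seconds) plus per-request fee.
    Default rates are illustrative; always check the provider's current pricing."""
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * gb_second_rate + invocations * per_request_rate

# 1M invocations at 120 ms each with 512 MB: you pay for exactly that compute
cost = lambda_cost(1_000_000, 120, 512)
print(f"${cost:.2f}")  # roughly $1.20 with these illustrative rates
```

The point of the model: halving your function’s runtime, or right-sizing its memory, cuts the bill proportionally, which is why the memory-size tuning mentioned above matters.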
How it works

Source: docs.amazon.com
2. Azure Functions:
Azure Functions is a serverless computing platform which allows you to write less code and manage fewer resources. It also saves money. Instead of worrying about managing servers, the cloud infrastructure provides all the tools needed to maintain the application’s functionality.
Azure Functions will take care the rest. Focus on the code that is most important to you.
Systems are often designed to respond to a series of critical events. Any program, regardless of whether it’s creating web APIs or reacting to data, will need to be able to do this.

Introduction to Serverless Computing

The conventional approach of building an organization's IT environment by acquiring hardware and software resources individually has become outdated. There are now many ways to virtualize IT systems and to access the required applications over the Internet as web-based services.
Cloud computing is all the rage right now, and the many cloud service providers on the market can make it difficult to decide which one to choose. Before you dive into serverless computing, refresh your memory with some cloud concepts.
Introduction to Serverless Computing
Serverless computing is a cloud code-execution model in which the cloud provider operates the virtual machines needed to fulfil requests, and bills by an abstract measure of the resources required to satisfy each request rather than per virtual machine per hour. Despite the name, code does not run without servers; the term reflects the fact that the system's owner does not need to rent, buy, or provision the virtual machines or servers on which the back-end software runs.
Why serverless computing?
Serverless computing can be more cost-effective than renting or purchasing a fixed number of servers, which often involves long periods of underuse and idle time.
A serverless architecture also means that developers and operators don't have to spend time setting up or tuning autoscaling systems or policies; the cloud provider scales capacity to meet demand.
These systems are often described as elastic rather than merely scalable, because a cloud-native architecture can scale both down and up in its entirety.
With function-as-a-service, the units of code exposed to the outside world are simple event-driven functions. Developers therefore don't need to think about multithreading or handle HTTP requests explicitly in their code, which simplifies backend software development.
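As a concrete illustration, a FaaS-style unit of code is just an ordinary function that receives one event at a time; the platform handles request parsing, concurrency, and scaling. The event shape and names below are hypothetical:

```python
# A minimal sketch of an event-driven function (hypothetical event shape):
# the platform invokes it once per event, so the code contains no HTTP
# parsing or threading logic.
def handle_upload(event):
    """React to a single 'file uploaded' event and return a summary."""
    name = event["filename"]
    size = event["size_bytes"]
    return {"message": f"processed {name}", "kilobytes": size / 1024}

# The provider would call this once per event; locally we can simulate it:
result = handle_upload({"filename": "report.csv", "size_bytes": 2048})
print(result["message"])  # processed report.csv
```

Because the function sees exactly one event per invocation, the provider is free to run as many copies in parallel as the workload demands.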
Top Serverless Computing Tools
1. AWS Lambda
AWS Lambda, introduced in 2014, was the first serverless computing tool of the kind popularly known as Function-as-a-Service (FaaS).
AWS Lambda lets you run code on a serverless computing platform without provisioning or managing servers, writing workload-aware cluster scaling logic, or managing event integrations.
Benefits:
No server management: AWS Lambda runs your code with no servers for you to maintain. Simply write the code, then upload it to Lambda as a ZIP file or container image.
Continuous scaling: AWS Lambda automatically scales your application by running code in response to each event. Your code runs in parallel, processing each trigger individually and scaling precisely to the workload's size, from a few requests per hour to hundreds of thousands per second.
Cost savings through millisecond metering: with AWS Lambda you pay only for the compute time you use, so you don't overpay for idle infrastructure. You are charged for the time your code runs, metered in milliseconds, and for the number of times it is triggered.
Consistent performance at any scale: AWS Lambda lets you optimize your code's execution time by choosing the right memory size for your function.
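To make the metering model concrete, here is a back-of-the-envelope cost sketch. The rates used are illustrative assumptions, not official AWS pricing; check the current price list before relying on any numbers:

```python
# Illustrative Lambda-style cost estimate (example rates, not official
# pricing). Billing combines compute time, metered by duration and memory,
# with a small per-request charge.
PRICE_PER_GB_SECOND = 0.0000166667    # assumed compute rate
PRICE_PER_REQUEST = 0.20 / 1_000_000  # assumed request rate

def monthly_cost(requests, duration_ms, memory_mb):
    # GB-seconds = invocations x seconds per invocation x GB of memory
    gb_seconds = requests * (duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + requests * PRICE_PER_REQUEST

# 3 million requests/month, 120 ms each, 512 MB of memory:
print(round(monthly_cost(3_000_000, 120, 512), 2))  # 3.6
```

The same workload on a fixed fleet of always-on servers would be billed for every idle hour as well, which is the cost difference the section above describes.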
How it works

Image Source: www.docs.amazon.com
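The basic flow shown in the diagram (an event arrives, Lambda invokes your handler, the handler returns a response) can be sketched with the standard Python handler signature; the event fields below are illustrative:

```python
import json

# Standard AWS Lambda Python handler signature: the service passes the
# triggering event (a dict) and a context object. Event fields here are
# illustrative, not a fixed schema.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# In production, Lambda invokes the handler; locally we can simulate a call:
response = lambda_handler({"name": "Lambda"}, None)
print(response["statusCode"])  # 200
```

Everything outside the handler (routing the event, scaling instances, retiring them when traffic drops) is the provider's job.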
2. Azure Functions:
Azure Functions is a serverless computing platform that lets you write less code, maintain less infrastructure, and save on costs. Instead of you worrying about deploying and managing servers, the cloud infrastructure provides all the resources needed to keep the application running.
Focus on the code that matters most to you, and Azure Functions takes care of the rest.
Systems often need to respond to a series of critical events. Any program, whether it is serving web APIs or reacting to data changes, needs a way to run code when those events occur.
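The event-driven pattern described above can be sketched in plain Python. The event types and handler names below are hypothetical stand-ins for the triggers (HTTP, queue, blob, timer) that a platform like Azure Functions wires up for you:

```python
# A minimal sketch of reacting to a series of events, the pattern that
# trigger-based platforms automate. Event types and fields are hypothetical.
def on_blob_created(event):
    return f"indexed {event['path']}"

def on_queue_message(event):
    return f"processed message {event['id']}"

# The platform routes each incoming event to the matching function; a local
# dispatcher makes the idea concrete:
HANDLERS = {"blob_created": on_blob_created, "queue_message": on_queue_message}

def dispatch(event):
    return HANDLERS[event["type"]](event)

events = [
    {"type": "blob_created", "path": "logs/app.txt"},
    {"type": "queue_message", "id": 7},
]
for e in events:
    print(dispatch(e))
```

In a real Azure Functions app, the dispatcher disappears: each function declares its trigger, and the platform invokes it whenever a matching event arrives.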