
The Best of AWS re:Invent

Link to webinar: The Best of AWS re:Invent

AWS re:Invent, hosted by Amazon Web Services, is an elite conference for the global cloud computing community. The event covers the latest innovations, current trends, and in-depth information on the cloud computing industry, along with the launch of several new products and technical features.

ATC was lucky enough to attend the festivities and soak up all the knowledge and unique experience so that we can stay up-to-date on cloud computing and continue to offer our customers the highest level of expertise in the cloud computing space.

Interested in becoming a certified SAFe practitioner?

Interested in becoming SAFe certified? ATC’s SAFe certification and training programs will give you an edge in the job market while putting you in a great position to drive SAFe transformation within your organization.

Agenda

  1. Containers & Kubernetes
  2. Data & Analytics
  3. Machine Learning & AI
  4. Security
  5. Edge Computing

Download the slide deck here.

Introduction

Kelsey Meyer: Hi everybody. We have Nick Reddin and Satya KG here with us today. Satya is our cloud wizard here at ATC. He attended the AWS re:Invent, and he’s going to tell us the best parts about it. We’re really looking forward to that. Before we get started, I’m going to give you guys a short introduction of each of our folks here. 

Nick Reddin is our Vice President here at ATC with 25 years of experience in technology working with Fortune 500 companies. He specializes in innovation, sales, and change management. 

Satya is our Solution Lead for Cloud here at ATC, which most of you already know is one of the many services that we offer. Satya has 15 years of experience consulting with startups and mid-to-large enterprise companies on software engineering and, specifically, cloud infrastructure. He specializes in AWS, which is Amazon Web Services, and Google Cloud. He’s attended this conference three times. This year he really knew what to expect, and he was able to mark down the key takeaways that we’ll discuss today.

A little bit about ATC for those of you who don’t know us. We are a business solutions company helping clients bridge the technology and process gap in order to accelerate their growth. Now that can mean a million different things in a million different ways. We’re a business solutions company, and we help people with their technology and let them scale. That’s everything that we do, whether it’s from cloud, RPA, staffing, all kinds of stuff. Today we’re going to focus specifically on cloud. On that note, I’m going to go ahead and hand it over to Nick. 

Nick Reddin: Great. Thank you, Kelsey. I appreciate it. Thank you for the introduction as well. Today we’re going to talk about the AWS re:Invent conference. This conference is massive. It’s easily one of the biggest conferences in the world. We had Satya attend, and he was able to bring back what I think are some really good nuggets, particularly about the changes that are coming in the next year. It’s really impossible to get a grip and a grasp on everything that takes place there. They have over 2,500 sessions during the conference, which is just a mammoth amount of content and opportunity to learn. If you’ve never gone, we encourage you to go. What we tried to do was summarize and bring back what we thought were some of the more interesting pieces that will help you.

If you did go, you may learn some things from sessions you weren’t able to attend. One of the things we also want to ask is that you submit questions. You can submit those as we go, and we will either try to answer them as we go, depending on where we’re at in the presentation, or we’ll definitely answer them at the end. We’ve got quite a bit of content here and subjects that we’re going to cover. Our overall agenda for the presentation is containers and Kubernetes, data and analytics, machine learning and AI, and security. Then, of course, edge computing. Satya is going to take over from here, and then we’ll start to address this as we go. So Satya, take it away.

Satya KG: Thanks everyone for joining today’s webinar. We are going to go over a bunch of these areas. One of the hot areas is containers and Kubernetes. Over the last few years we have seen developers shift from deploying applications on bare metal to virtualizing their applications. Now containers have taken over. AWS has been placing a lot of focus on containers and Kubernetes. In the past they launched services like Elastic Container Service (ECS) and Elastic Kubernetes Service (EKS), and this year they decided to add a couple more features.

Interested in being a speaker for one of our webinars? Let’s talk!

New Features for Containers and Kubernetes

One of the new features is Amazon Elastic Kubernetes Service (EKS) with support for AWS Fargate. Fargate makes it very straightforward to run Kubernetes-based applications because it eliminates the need to provision and manage the servers underneath them.

Fargate is a serverless computing environment that allows developers to scale their applications. The beauty of Fargate is that customers do not need to be experts in Kubernetes operations to run a cost-optimized and highly available cluster. Fargate also eliminates the need for customers to create and manage EC2 instances for their EKS clusters. Customers no longer have to worry about patching, scaling, or securing a large fleet of EC2 instances in order to run Kubernetes applications in the cloud. This makes it very easy for developers to right-size resource utilization for each application and allows customers to see the cost of each pod that’s running within the Kubernetes cluster.
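
As a rough illustration of how little infrastructure work is left for the developer, here is a minimal sketch using boto3; the cluster name, role ARN, subnets, and namespace are placeholders:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create a Fargate profile so pods in the "orders" namespace are scheduled
# onto Fargate-managed capacity instead of self-managed EC2 worker nodes.
response = eks.create_fargate_profile(
    fargateProfileName="orders-profile",
    clusterName="demo-cluster",
    podExecutionRoleArn="arn:aws:iam::123456789012:role/DemoPodExecutionRole",
    subnets=["subnet-0abc1234", "subnet-0def5678"],
    selectors=[{"namespace": "orders"}],
)
print(response["fargateProfile"]["status"])  # e.g. "CREATING"
```

Once the profile is active, any pod that matches the selector simply runs; there is no node group to patch, scale, or secure.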

The next service that we are going to talk about is ECS cluster auto scaling. ECS as a service has existed for over three years, and ECS clusters have been around since then, but the auto scaling piece is a new feature. It enables you to have more control over how you scale tasks within a specific cluster. Each cluster has its own capacity providers, and choosing the default capacity provider used to be a manual decision made by the system administrator or the developer. Until now, developers had to go back and do additional provisioning and configuration to auto scale the cluster through EC2 Auto Scaling. Cluster auto scaling is a feature where the cluster’s capacity adjusts automatically to support the tasks or services that you run.

The third interesting feature in the containers and Kubernetes ecosystem is something called a capacity provider. Capacity providers are a new way to manage compute capacity for containers. They let the application define its requirements for how to use the capacity. Think of them as a set of flexible rules for how containerized workloads run on different types of compute capacity and how you manage the scaling of that capacity. Capacity providers allow developers to improve the availability, scalability, and cost of running tasks within the ECS environment itself. There are statements that close to 70% of production containerized workloads running today run on Amazon, and the number of containerized workloads going through migration has been growing at a rate of 200% year over year. There’s a lot of room for containers and Kubernetes.
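
For readers who want to see what this looks like in code, here is a hedged sketch with boto3; the Auto Scaling group, cluster, and provider names are placeholders:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register an Auto Scaling group as a capacity provider with managed scaling,
# so ECS grows and shrinks the group to keep it roughly 80% utilized.
ecs.create_capacity_provider(
    name="demo-capacity-provider",
    autoScalingGroupProvider={
        "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:123456789012:"
        "autoScalingGroup:11111111-2222-3333-4444-555555555555:"
        "autoScalingGroupName/demo-asg",
        "managedScaling": {"status": "ENABLED", "targetCapacity": 80},
        "managedTerminationProtection": "DISABLED",
    },
)

# Attach the provider to a cluster and make it the default strategy, so new
# tasks automatically drive the cluster's capacity up or down.
ecs.put_cluster_capacity_providers(
    cluster="demo-cluster",
    capacityProviders=["demo-capacity-provider"],
    defaultCapacityProviderStrategy=[
        {"capacityProvider": "demo-capacity-provider", "weight": 1}
    ],
)
```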

Nick Reddin: We know Kubernetes is growing like crazy. There’s a lot of demand. We see it ourselves in companies wanting to deploy more in the cloud with Kubernetes. How much of that do you think is going to continue to grow over the next two years? 

Satya KG: That’s a great question, Nick. I think if you look at the past 10 to 15 years, virtualization took almost that long to emerge. It took almost 10 to 15 years for customers to virtualize the workloads running in their data centers. But what we are seeing with container adoption, especially with the web-scale companies and some of the fast-growing companies, is that the runway is going to be much shorter. So it’s very fair to say that we don’t have to wait for a 10 to 15 year window like we did for virtualized environments. I think containerization and the entire Kubernetes ecosystem are going to see very rapid adoption in the next two to three years. We have also seen a lot of traditional companies and environments, for example banking and healthcare, adopt containers purely for two reasons. One is the overall cost of operations, and the second is to improve their customer experience, because their infrastructure costs become lower and it gives them a better option to run workloads at scale.

Is Kubernetes going to change the Cloud market share?

Nick Reddin: We’ve got the big three providers out there. Obviously Amazon has definitely had the lion’s share of the market, not just with Kubernetes, but cloud in particular. Do you think any of that is going to change? There’s a lot of speculation as it relates to Kubernetes and if it’s going to give Google a leg up on Amazon or if it’s going to give Microsoft a leg up on Amazon. Do you think there will be any changes or do you think Amazon will continue to own the space? 

Satya KG: What we have really seen is that while Kubernetes itself originated from Google as a cloud-native project, we have seen more pickup and execution from AWS and Azure. In fact, I have seen that Azure has been more aggressive in the overall Kubernetes execution space by adding their own value-added offerings on top of the Kubernetes project.

Unfortunately, Google wasn’t able to capitalize as much on the Kubernetes offering, but it looks like AWS and Azure are really aggressive in terms of launching their own offerings. Surprisingly, we have also seen players like VMware, which recently bought Pivotal, kind of double down with their Pivotal Kubernetes service and Pivotal Container Service. It looks like there’s going to be a lot of momentum, and it’s not just the cloud providers but also the traditional virtualization and infrastructure providers becoming more aggressive in the container and Kubernetes space.

Nick Reddin: Do you think with VMware in particular, which has done a lot in the last six months reshaping and re-imagining its business and pivoting to Kubernetes, that was really just to save their name and to stay in business?

Satya KG: Not necessarily. It’s a fundamental shift that’s happening in the technology world because people are now realizing, for example, Google is a company that has seen web scale. They were the company that runs millions and millions of requests per second with less infrastructure. And it was only possible because of containerization and Kubernetes. So it’s a big technological shift that’s happening where there was bare metal, there was virtualization, and now there are containers and Kubernetes. 

What we are seeing is players across the spectrum, whether they’re infrastructure providers, whether they are virtualization providers or independent software vendors. Everyone is trying to catch this wave of containerization. So it’s a big fundamental shift in terms of how applications are developed and deployed. It looks like they definitely don’t want to miss out on this wave as well. But it’s a big pivot that they have been undergoing as well, yes. 

Nick Reddin: It seems like it’s going to be an exciting year for all of this. 

Satya KG: One of the hot areas that a lot of customers speak about is the whole data and analytics space. While data itself isn’t really new, the way data is captured today, the velocity of data that some of these companies are seeing, and how to persist it are. It’s not just about processing the data, but seeing how to make sense out of it. So data and analytics play a very crucial role for many companies.

What is Redshift Federated Query?

Satya KG: Surprisingly, one of the fastest growing products within AWS is a product called Redshift. Redshift is a data warehouse that supports massively parallel processing workloads. Redshift has launched a lot of features over the years. One of the very interesting features, which came out of a lot of customer feedback loops, is its ability to do federated query.

A federated query is a very interesting feature that allows the user to query and analyze data across operational databases, data warehouses, and data lakes. Redshift originated as a data warehouse in itself. Now developers and admins have the ability to run queries on live data sitting in Amazon RDS or Amazon Aurora, so that you can run your queries on Redshift and, in parallel, on any other relational databases that you have hosted there. The beauty of this is that, as part of BI and reporting today, customers are saying, “Hey, great. I really want to put all my data in a warehouse and then report on top of it.”

But what they’re also realizing is that, given the length of time it takes for data to land in a warehouse before they can analyze on top of it, that’s mostly a batch-based environment. So it’s not near real time. Customers want to see insights in real time. If they want to see insights in real time, then they need the ability to query the data across multiple data stores. That is why this feature of federated query is very powerful.

We all know that Redshift has its own massively parallel processing capabilities, but what federated query allows you to do is ingest data into Redshift and query operational databases at the same time. You can apply transformations on the fly and build and load data without having complex ETL pipelines.
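
As a hedged sketch of what that can look like in practice, the example below uses the Redshift Data API via boto3 to define an external schema over a live Aurora PostgreSQL database and then join it with warehouse tables in one query; the cluster, database, role, secret, and table names are all illustrative:

```python
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

# Step 1: define an external schema that points at a live Aurora PostgreSQL
# database. After this, its tables can be queried from Redshift directly.
rsd.execute_statement(
    ClusterIdentifier="demo-redshift",
    Database="analytics",
    DbUser="analyst",
    Sql="""
        CREATE EXTERNAL SCHEMA IF NOT EXISTS postgres_live
        FROM POSTGRES
        DATABASE 'orders' SCHEMA 'public'
        URI 'aurora-demo.cluster-abc123.us-east-1.rds.amazonaws.com' PORT 5432
        IAM_ROLE 'arn:aws:iam::123456789012:role/DemoRedshiftFederatedRole'
        SECRET_ARN 'arn:aws:secretsmanager:us-east-1:123456789012:secret:aurora-creds'
    """,
)

# Step 2: join live operational rows with historical warehouse data in a
# single query, with no ETL pipeline in between.
rsd.execute_statement(
    ClusterIdentifier="demo-redshift",
    Database="analytics",
    DbUser="analyst",
    Sql="""
        SELECT o.order_id, o.status, h.lifetime_value
        FROM postgres_live.orders AS o
        JOIN warehouse.customer_history AS h ON h.customer_id = o.customer_id
        WHERE o.created_at > dateadd(hour, -1, getdate())
    """,
)
```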

Nick Reddin: I’ve been seeing a lot of talk about this as well. So this is similar to what Splunk was kind of doing, right? 

Satya KG: Yes, exactly. You raise a very interesting point, Nick, because if you look at the past there were data warehouses like Teradata, HP Vertica, et cetera. Unfortunately, these data warehouses have storage and compute tightly bound to them. So if you want to run more queries or something, you have to add more nodes, and it was not scalable. With tools like Redshift, where the compute and the storage are completely elastic, you can run millions and millions of queries at any point. Then you can also process any kind of data, whether it is relational data, columnar data, or time-series data; you can process all of them in Redshift. What we have also seen is that it is not just Redshift from AWS, but other products, like BigQuery from Google or Azure SQL Data Warehouse, that have been growing phenomenally. Customers realize that they need a way to process all the data and derive insights from it.

Nick Reddin: For Redshift, I think it’s really going to be a good boon for them as far as their offerings overall.

What are Elasticsearch Service and Amazon EMR?

Satya KG: One of the other interesting services is Amazon Elasticsearch Service, which is a managed Elasticsearch offering. They have something called UltraWarm. UltraWarm is a performance-optimized storage tier. It allows you to store and interactively analyze your data using Elasticsearch and Kibana while reducing your cost per gigabyte by up to 90% over existing Amazon Elasticsearch Service hot storage options. Today, if you use the Amazon Elasticsearch Service and run any kind of query, you still pay on a cost-per-gigabyte basis for hot storage. With UltraWarm, which is a performance-optimized warm storage tier, your overall query and analysis cost goes down.
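
As a rough sketch of how UltraWarm is enabled, the boto3 call below creates a domain with a small hot tier plus warm nodes; the domain name and instance sizes are placeholders, and UltraWarm requires dedicated master nodes:

```python
import boto3

es = boto3.client("es", region_name="us-east-1")

# Create an Elasticsearch domain with UltraWarm enabled so older indexes can
# be moved to the cheaper warm tier while staying queryable from Kibana.
es.create_elasticsearch_domain(
    DomainName="logs-demo",
    ElasticsearchVersion="7.1",
    ElasticsearchClusterConfig={
        "InstanceType": "r5.large.elasticsearch",
        "InstanceCount": 3,
        "DedicatedMasterEnabled": True,
        "DedicatedMasterType": "c5.large.elasticsearch",
        "DedicatedMasterCount": 3,
        "WarmEnabled": True,  # turn on the UltraWarm tier
        "WarmType": "ultrawarm1.medium.elasticsearch",
        "WarmCount": 2,
    },
    EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 100},
)
```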

Last but not least, EMR as a service has been very popular and a lot of Amazon customers have been saying, “EMR is really great, but can we replicate EMR in our own data center?” This is a very interesting proposition because there are a lot of solutions like Kafka, Pub/Sub, and a couple of other Pub/Sub mechanisms that customers have been using within their own data center environments. 

EMR has been a very successful service, so customers have been asking, “How do we make it run within our own data centers?” That is when Amazon launched this capability. EMR is now available in data centers using the Outposts service, which we are going to talk about later. The beauty of this is that you can create the EMR cluster on premises using your AWS console or command line, and the clusters will appear within your Outpost. The biggest advantage is that it allows you to augment on-premises processing capacity. It allows you to process data that needs to remain on premises, so that, for example, if you have data sets that you always want to process and persist on premises, you can continue doing that by using EMR. Most importantly, you can also phase data and workload migrations should the customer choose to move at a later point in time.
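
The mechanics are simple in code terms: you launch the cluster through the normal EMR API and place it in a subnet that is associated with your Outpost. A hedged sketch with boto3, where every identifier is a placeholder:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Launch a Spark cluster through the regular EMR API; placing it in a subnet
# that lives on the Outpost is what makes it run on premises.
emr.run_job_flow(
    Name="onprem-spark",
    ReleaseLabel="emr-5.28.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "Ec2SubnetId": "subnet-0outpost1234567890",  # subnet on the Outpost
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```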

Nick Reddin: Machine learning and AI obviously are huge. I know these are going to be some really good topics, but just as a reminder to our audience as well, if you have any questions about any of the things that we’ve talked about so far, please feel free to submit those at any time. We’ll either try to answer them as we go or we’ll definitely catch them at the end. 

Satya KG: I’m sure you know that machine learning and AI is a very hot topic. In fact, of all the AWS services that were announced, a significant portion featured machine learning and AI. It’s no surprise that it continues to attract the attention of the entire AWS re:Invent audience. While machine learning itself isn’t new, the spectrum of tooling around machine learning and AI is relatively new. What AWS has been trying to do is double down on some of the key tooling that will improve the experience for machine learning engineers and other audiences that have to work with machine learning models.

What are SageMaker’s new features?

Satya KG: Earlier, machine learning used to be confined to machine learning engineers, and the tooling used to be fairly complex. AWS is coming out with a lot of tooling and knowledge to bring that level of expertise to an ordinary audience as well, so that a non-developer audience, like business folks, admins, or product managers, can use SageMaker to deploy machine learning models. SageMaker is kind of their integrated studio. They launched a lot of new features. For example, they have launched Experiments, Debugger, Model Monitor, and Autopilot. Let’s look at what each of them really means.

SageMaker Experiments is a new capability that lets you organize, track, compare, and evaluate your machine learning experiments and model versions. Debugger allows you to automatically identify complex issues developing in your ML training jobs. Model Monitor is more like an application performance monitoring tool that automatically monitors your machine learning models in production. Think of it as run-time monitoring of your machine learning model: it alerts you whenever there are issues in data quality, the data pipeline, or feature engineering. Think of it as a performance management tool.

Autopilot is an interesting feature. It’s almost like SageMaker is using its own AI capabilities to automatically create and pick the best classification and regression machine learning models while allowing the user to keep control and visibility. What SageMaker is really evolving into is an end-to-end workbench, or a platform, that allows people to create these models, run experiments, debug the models, and monitor them in production at runtime to see if there are any data issues or feature engineering issues. You can also let Autopilot build the model for you, should you wish to.
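
To make the Autopilot idea concrete, here is a minimal boto3 sketch; the bucket, role, job name, and target column are placeholders:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Point Autopilot at a CSV in S3, name the column to predict, and let it
# explore candidate classification models on its own.
sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://demo-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://demo-bucket/churn/autopilot-output/"},
    ProblemType="BinaryClassification",
    AutoMLJobObjective={"MetricName": "F1"},
    RoleArn="arn:aws:iam::123456789012:role/DemoSageMakerRole",
)

# Poll the job; once it completes, the response also includes the best
# candidate model that Autopilot found.
status = sm.describe_auto_ml_job(AutoMLJobName="churn-autopilot-demo")
print(status["AutoMLJobStatus"])
```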

How does CodeGuru work?

Satya KG: Another interesting service that came out is Amazon CodeGuru. CodeGuru is a managed service. For a long time developers had to rely on writing their own code and getting it reviewed by their peers, and peers would make comments back to the developers; it’s an iterative process. So imagine a service that looks at every line of code that you write and keeps giving you best-practice recommendations. CodeGuru is that service: it helps developers proactively improve code quality through machine learning-driven recommendations. The service comes with a reviewer and a profiler that can detect and identify issues in code. For example, Amazon CodeGuru can review and profile Java code targeting the JVM, so developers can continuously use it to improve their application performance. You no longer require the peer reviews or manager reviews.
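
Getting started is mostly a matter of associating a repository with the Reviewer, after which pull requests receive automated comments. A hedged boto3 sketch, with the repository name as a placeholder:

```python
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

# Associate a CodeCommit repository with CodeGuru Reviewer; once associated,
# new pull requests in the repo get automated review recommendations.
assoc = reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "payments-service"}}
)
print(assoc["RepositoryAssociation"]["State"])  # e.g. "Associating"
```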

Why is Amazon Kendra a useful managed service?

Satya KG: The other interesting service is Kendra. Amazon has launched Kendra, which is a managed service that brings contextual search to applications. The contextual search is very relevant. For a long time we have seen various solutions for enterprise search, but with contextual search you can parse documents stored in a variety of systems. For example, a lot of organizations have files stored in Box, Dropbox, Salesforce, SharePoint, et cetera. If you want to do a contextual search within those specific files, or for specific data from a third-party service, Kendra allows you to do it. For example, I might be operating in a customer support ticket system, but I want to search for data that is in Salesforce, or I want to search for onboarding documents that are available on SharePoint, et cetera. I can search directly from the customer support system without having to leave it, go to a third-party application, and then search there. It provides contextual search for various data sources used by the enterprise from anywhere.
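
Once an index has data sources such as SharePoint or Salesforce attached, querying it is a single call. A minimal boto3 sketch; the index ID and question are illustrative:

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Ask a natural-language question against an existing Kendra index that
# already has connectors (SharePoint, Salesforce, S3, ...) attached.
result = kendra.query(
    IndexId="12345678-1234-1234-1234-123456789012",
    QueryText="What is the laptop policy for new hires?",
)

for item in result["ResultItems"]:
    title = item.get("DocumentTitle", {}).get("Text", "")
    print(item["Type"], title)
```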

Nick Reddin: It sounds like AWS has a huge ecosystem of partners out there that have been acting as third-party applications. Like you just said, some of these features are replacing parts of that ecosystem with Amazon’s own platforms and tools. Does that seem to be what’s taking place?

Satya KG: Yes, I think that’s a fair statement. Ultimately it’s up to the customer to opt for what is the right choice. The customer has to decide what fits their needs best. If you look at the file sharing system, there is Box and Dropbox. Or SharePoint. Customers can choose to pick any of these solutions. I think it’s always going to be a competing market. I would say Amazon is going to compete with other independent service providers. A very simple example is application performance monitoring. You can use AppDynamics or New Relic, or you can use AWS CloudWatch, but a lot of customers pick AppDynamics and New Relic because they know a very focused application with a very deep capability might serve them better, whereas for an entry point solution, I can use AWS CloudWatch. It’s really up to the customer. The market is becoming more competitive. Customers will always have their say in terms of what to pick for them. 

Nick Reddin: That’s a great point. Competition makes everybody better typically. 

Top Security Services from AWS re:Invent

Satya KG: I think one of the hot areas coming on the heels of machine learning and AI is security. That’s kind of on top of everyone’s mind. There were a lot of services announced around cloud security, along with a lot of partner offerings: how to monitor your instances, how to collect data from your instances, how to ensure your customer data or personally identifiable information is not stored on any of the data stores or instances, et cetera.

Some of the things that really stood out were Amazon Detective, Amazon Nitro Enclaves, and IAM Access Analyzer for S3 Access Points. Amazon Detective is an interesting service because it allows you to investigate and identify potential security issues faster. It collects the log data from the AWS services and resources you have been using and uses machine learning to identify the issues and alert you. In fact, there’s also a remediation capability around Detective that allows you to identify those security issues and auto-remediate without human intervention. It’s really up to the administrator’s configuration to decide whether they let Detective run on autopilot or whether they want to intervene on each of the security issues that it surfaces.
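
Enabling Detective is mostly a one-time setup: create the behavior graph for the account and invite member accounts into it. A hedged boto3 sketch with placeholder account details:

```python
import boto3

detective = boto3.client("detective", region_name="us-east-1")

# Create the behavior graph that Detective uses to ingest and correlate
# account activity, then invite a member account to contribute its data.
graph = detective.create_graph()
detective.create_members(
    GraphArn=graph["GraphArn"],
    Message="Joining the security investigation graph",
    Accounts=[{"AccountId": "210987654321", "EmailAddress": "secops@example.com"}],
)
```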

Nitro Enclaves is an interesting proposition. It lets you protect highly sensitive data by partitioning the compute and memory resources within a particular instance to create an isolated compute environment. This is very useful given a lot of what we have seen with the recent California Consumer Privacy Act and earlier with GDPR, et cetera.

There are some environments, such as healthcare, finance, and other verticals, where personally identifiable information is needed and must be treated very carefully. Nitro Enclaves uses the same hypervisor technology to isolate both the compute and memory resources within a particular instance and allows you to process this data very carefully.
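
At launch time the only change is an instance flag; the isolated enclave itself is then built and run on the instance with the separate nitro-cli tooling. A hedged boto3 sketch with placeholder identifiers:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch an instance with enclave support turned on; a carved-out portion of
# its CPU and memory can later host an isolated enclave for sensitive data.
ec2.run_instances(
    ImageId="ami-0abc1234def567890",
    InstanceType="m5.2xlarge",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0abc1234",
    KeyName="demo-key",
    EnclaveOptions={"Enabled": True},
)
```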

Nick Reddin: This is a really good one. We know this is something customers have been asking for that we’ve heard internally as well as with our own clients. What industries in particular do you think are really going to benefit from this? 

Satya KG: I think what we’re essentially seeing is regulated industries, such as financial services and healthcare, wherever there is a lot of consumer data that needs to be persisted and kept secure. For example, you can have all the e-commerce data about the customer, which is still important, but it’s not as sensitive as a social security number or credit history or something similar, where it needs to be more regulated. I look at those as the best use cases.

We are also seeing that every customer has been looking for options to have control over their customer data at rest and, most importantly, during transmission. Customers want a thorough amount of security and encryption for that data.

Last but not least is IAM Access Analyzer for S3 Access Points. IAM itself has existed for a very long time: within IAM you can create roles and assign them to individuals within the organization. Unfortunately, granting access to an external principal that is not within the zone of trust has always been a big challenge. That’s where S3 Access Points really come through. Access Analyzer for S3 alerts the admins when S3 buckets are configured to allow access to anyone on the internet or to other AWS accounts, so the buckets can be properly configured.

Because these are kind of a distributed file store or object store, sometimes you would want to grant permissions outside your organization. A very simple example is a company like Ford that might want to share files with their suppliers or collaborate with some other suppliers downstream. Basically, you can create a bucket, set access permissions on those buckets, and define who has read-write access through an access control list.
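
A hedged sketch of the access point pattern with boto3: create a named access point for one consumer and attach a policy scoped to it, instead of widening the bucket-wide policy. Account IDs, bucket, and role names are placeholders:

```python
import boto3
import json

s3control = boto3.client("s3control", region_name="us-east-1")

# Create a dedicated access point that a single supplier will use to reach
# the shared bucket, rather than granting them bucket-wide access.
s3control.create_access_point(
    AccountId="123456789012",
    Name="supplier-readonly",
    Bucket="shared-design-files",
)

# Attach a policy to the access point granting read-only access to one
# external role in the supplier's AWS account.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::210987654321:role/SupplierReadRole"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:us-east-1:123456789012:"
                    "accesspoint/supplier-readonly/object/*",
    }],
}
s3control.put_access_point_policy(
    AccountId="123456789012",
    Name="supplier-readonly",
    Policy=json.dumps(policy),
)
```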

Nick Reddin: It seems from our own customers and the companies that we’ve been working with that governance is a really big issue for companies around their cloud instances. A lot of them have either no governance or very light governance at best. If I’m not wrong, AWS is really stepping up to help companies have better governance over their access points. 

Satya KG: Yes. That is very much an apt statement because companies have kind of matured enough on how to do access governance for their physical infrastructure and how to do it for their applications. Most of it was the applications and the structure within their control. They were running the applications in their own data centers, so they had more control over it. 

Unfortunately, now the cloud is spread across regions. You might have infrastructure or applications on the East Coast, the West Coast, et cetera, and you might have applications running anywhere around the world. Somehow that loss of control needs to be compensated for with a better governance model, which is what everyone is trying to improve. The cloud governance model is something that both organizations and even the cloud vendors themselves are going through a maturity curve on. It’s very fair to say that in a couple of years the cloud governance model will standardize, like ISO, and then it will be uniform across the board. Right now I would say everyone is going through that maturity.

Nick Reddin: That’s good. That’ll make our jobs a lot easier. That’s always one of the first things we have to do with a lot of the companies is help them with their governance. 

Satya KG: Yes. I think the other interesting thing that is coming out is edge computing. A lot of the audience was familiar with the cloud computing model, but a lot of the audience was unfamiliar with edge and thought, “What is this edge computing about and what is it going to do for me?” Three things that stood out from the edge computing announcements were AWS Outposts, AWS Local Zones, and AWS Wavelength.

What is edge computing?

Nick Reddin: For our audience that may not know what edge computing is, this is the cutting edge of what’s taking place now. Even people in the business don’t really seem to understand it very well. So for those listening, what is edge computing?

Satya KG: What essentially happened is that the cloud computing model has involved two phases so far. Earlier, people used to have their own data centers. Now they have these public cloud providers, and these public cloud providers have regional centers spread across the U.S. and other parts of the globe. Let’s say you’re an online service. Your customers can be in any part of the U.S. That doesn’t necessarily mean they are located very close to the regional center where this public cloud provider has a presence, so the user can experience some latency and other application challenges. What is really happening is that these cloud providers are thinking, “We have taken the data center to the cloud, but now we need to bring the cloud closer to the user.”

So bringing the cloud closer to the user means you need to shrink the data center within the public cloud to bring it much closer to the user to give them a better experience. So that’s what edge computing is about. Edge computing is about eventually providing a better access mechanism for the user in whichever location they are, so that they can have a better experience. 

How do we make it happen? We need to bring the compute, network, and storage much closer to where the user is, so that the application running in this environment can give a better experience to the user. That’s a high-level look at what edge computing is. On a different note, edge computing is also for customers who moved to the cloud but still have some workloads they can never move, yet want to see the benefits of cloud scale within their own data center environment.

What are some AWS edge computing services?

Satya KG: Some of this audience has hybrid cloud, et cetera; basically the ability to manage workloads, some within your data center and some in the cloud. So now these cloud providers are taking all the cloud capabilities, putting them in a box, and giving it to the customer so they can use it in their data center. For example, AWS Outposts, the first service that we’re talking about right now, lets you rent AWS to run within your own data center. Think of it like AWS in a box where you can launch EC2 instances. You can use the same set of tools, like the AWS Console or CloudFormation templates.

In the previous slides we touched on how you can run EMR jobs on Outposts itself. A lot of these AWS services are served on the public cloud; now customers can run them within their own data center. That’s what Outposts is essentially for. Customers can rent hardware appliances from AWS, and those appliances come with the cloud built in.

The next service in relation to edge computing is AWS Local Zones. Local Zones make your cloud hyperlocal by bringing compute, storage, and network services to users within a city. You have these regional centers, which are large data centers spread outside of the cities, but for an object like a self-driving car that requires much closer compute and better network capability, it cannot make a round trip that far. By having a Local Zone you have geographic proximity to end users. Developers can choose where to deploy applications, whether in an Availability Zone, a Region, or a Local Zone.
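
Using a Local Zone is an opt-in: once the zone group is enabled for the account, subnets and instances can be created there like in any other zone. A hedged boto3 sketch using the Los Angeles zone group as an example:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Opt the account in to the Los Angeles Local Zone group so subnets and
# instances can be placed close to end users in that city.
ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",
    OptInStatus="opted-in",
)

# List the Local Zones visible to the account and their opt-in status.
zones = ec2.describe_availability_zones(
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
    AllAvailabilityZones=True,
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])
```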

The last one is called Wavelength Zones. Wavelength Zones are an infrastructure deployment; think of them like a network deployment that embeds compute and storage within a telecommunications provider’s network. In this aspect, Amazon has partnered with Verizon, which is rolling out 5G across the U.S., starting in Chicago next month. Wavelength brings the power of the AWS cloud to the edge. For latency-sensitive use cases, imagine coupling AWS with a strong 5G provider: AWS brings the storage and compute, and the carrier brings the network capability. The combination gives end users a better experience, almost like real-time responses. Think of it as a subset of the Local Zone capability. Here we are blending the best of both by bringing a very powerful 5G network together with the storage and compute capabilities of AWS.

Nick Reddin: One of the things over the years is that AWS, because they were first, gets picked on a little bit by the newer providers for latency issues with where some of the customers are located versus where their data centers are located. It sounds like they’re really trying to make that complaint go away with all these edge services. Is that a fair statement?

Satya KG: That’s a fair statement. The other way to look at it is that, of course, there will be content and applications that users need to access faster. I don’t think users are complaining if it takes a few milliseconds, but what’s really driving this trend is IoT. For example, smart meters and self-driving cars need to be constantly connected to the network and constantly processing data. Unfortunately, they cannot make those round trips back to a distant region. So you really need to bring the network, compute, and storage much closer to those objects, these IoT-enabled devices, for them to become smarter. I think more than the end-user experience, which is also a very key case, it’s the physical objects becoming internet-enabled that are driving this trend, because they always need to be connected and always need to process data. They need to get smarter while processing the data without much latency. So that big trend of IoT is actually driving edge computing.

Nick Reddin: That’s a great point. We don’t always think about the IoT even though we all have IoT devices on us at almost any given time during the day, whether it be from an Apple watch to our cell phones to whatever. My car is IoT enabled to give feedback and all kinds of information is feeding to the manufacturer as well as to myself and to my app for my car. It’s really fascinating and necessary as well. It makes sense that they would partner with a Verizon or another cell company that already is used to having this kind of ubiquitous coverage everywhere and along mainstream highways as well. They’re really going after it in a large way, which I think is probably going to really help them separate from the competition too. 

Satya KG: Yes, I agree. 

Nick Reddin: Great. We’ve gotten to the end here and it looks like we’ve seen a couple of questions come in. We’ll give some time here for any other questions that might come in. I’ll turn it over to Kelsey here in just a second and see what we have. 

Kelsey Meyer: I do have a couple of questions that have come in over the course of the talk. I will go ahead and start with the first one, not in any particular order. How many services total did AWS launch at re:Invent ’19?

Satya KG: Around 42 new services were launched at re:Invent. This doesn’t count the minor enhancements to existing services, which would probably take us to more than 100 or 200 announcements. For new services themselves, completely new offerings brought to the market for the first time, we are talking about 40 to 42.

Kelsey Meyer: What is AWS Nitro? I have heard that as a buzzword. 

Satya KG: What Amazon has been doing is providing the EC2 instances, which are like boxes that allow you to run applications. There are layers within that stack: the compute, the storage, the network, and the virtualization. What Amazon has done is ask, “Can we make the compute, storage, and virtualization independent of the EC2 instance itself?” That is where Nitro comes in as a new kind of technology. They said, “We’re going to offload it.” Today a server is bound by its capacity around what it can do with compute and network. Nitro allows you to isolate those functions, giving you dedicated capability so that an EC2 instance is not bound by each physical limit of compute, storage, et cetera. It’s kind of a new mechanism where no EC2 instance should be bound by a compute, network, or hypervisor limitation. It’s an abstraction layer that allows EC2 instances to scale seamlessly.

Nick Reddin: That’s pretty cool. It makes it very fluid and really helps customers’ efficiency tremendously, especially during their peak times.

Satya KG: Exactly. A simple way to look at it is like it’s almost a chip in itself, but the chip doesn’t have any binding around circuit capacity and it can be as elastic as possible. 

Kelsey Meyer: The last question that I have is, “Is Amazon CodeGuru available?”

Satya KG: We talked about the CodeGuru service in one of the earlier slides. Right now CodeGuru is open to certain developers and supports certain runtimes. For example, it’s available for Java, it’s available for .NET, and it’s available for some of the popular languages, but I think later this year it’s going to be generally available for a wider range of technologies. Right now it supports a limited set of languages and frameworks, but later this year they are going to open it up. You can say it’s in an alpha phase.

Kelsey Meyer: That is all of the questions that we have for you, Satya. I just want all the attendees to know that if you continue to have questions feel free to email Satya or check our website, send them a chat, that kind of thing. We can answer you there as well. We would love to keep any conversations going on social media as well. Feel free to post to us there. We’ll be happy to get back to you. So thank you everyone for joining us. 

Learn More

Curious about what else Satya learned at AWS re:Invent, or want more detailed information on the products discussed in this webinar? Contact us today or email Satya at satya@american-technology.net.
