The 2020 Jeep Compass has a 4×4 snow mode that kept me from sliding into a ditch

When technology works, it’s amazing and even life-altering. You wonder how you communicated without a smartphone, or how you connected with people before social media existed. With cars, there are safety features that can seem almost magical: the car takes over for you and performs an operation you might not be able to do yourself.

Case in point is the 2020 Jeep Compass Trailhawk 4x4 and the Selec-Terrain feature I tested recently. It’s designed to intelligently manage the drivetrain for you in certain conditions, such as muddy roads or snow and ice. I tested the Snow setting for an entire week because, in my area, the snow moved in for several days in blizzard-like conditions.

2020 Jeep Compass

This 4x4 crossover kept me centered on the road during an ice storm, even preventing the car from fishtailing when I overcompensated. I was amazed at how intelligently the Compass Trailhawk managed power to the tires, even when I was fighting against the algorithms at times. According to Jeep reps, Selec-Terrain’s Snow mode automatically detects which tires are slipping and then provides more torque to the wheels that still have better grip on the road.
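
The slip-detection behavior described above can be sketched as a simple allocation rule: measure slip per wheel and hand torque to the wheels that still grip. This is an illustrative toy with invented numbers, not Jeep’s actual Selec-Terrain control code:

```python
def distribute_torque(total_torque, slip_ratios):
    """Split drive torque across wheels in proportion to grip.

    slip_ratios: per-wheel slip, 0.0 (full grip) to 1.0 (spinning freely).
    Wheels with less slip (more grip) receive more torque.
    Simplified illustration, not Jeep's actual algorithm.
    """
    grip = [1.0 - s for s in slip_ratios]
    total_grip = sum(grip)
    if total_grip == 0:  # every wheel spinning: fall back to an even split
        return [total_torque / len(slip_ratios)] * len(slip_ratios)
    return [total_torque * g / total_grip for g in grip]

# Front-left wheel slipping badly on ice; the other three still grip.
torques = distribute_torque(200.0, [0.8, 0.1, 0.1, 0.1])
```

The slipping wheel ends up with a small share of the 200 units of torque while the gripping wheels split the rest evenly, which is the sensation the article describes: power quietly flowing away from the wheel that can’t use it.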

In real-time, it’s an interesting sensation. A passenger with me also felt the change – the car seems to take over slightly. What would normally happen – you’d fishtail and slide into the ditch – feels more like guardrails on the car that keep you straight. And, you don’t need to do anything differently other than keep driving and avoid overreacting.

Another feature of Snow mode is that the Jeep starts from a stop in second gear, which limits wheel torque and removes the temptation to punch it too fast and spin out. Infiniti and other makers of 4x4 vehicles offer similar second-gear starts.

The Electronic Stability Control (or ESC) also kicks in when you use Snow mode. This well-known and common feature constantly watches for unusual behavior, such as the beginning of a fishtail, and attempts to correct it quickly.

The road ahead

In the future, I’m expecting cars to go to the next step. We won’t need to select the drive mode anymore, but instead, the car will know it is snowing or know that the road is muddy or wet. This might be based on current weather conditions but more likely the car will use sensors to interpret the road conditions in real-time.

It’s not rocket science, actually. Today’s rain-sensing windshield wipers already detect wetness: they bounce an infrared beam off the windshield, and water on the glass changes how much light is reflected back, which even lets the sensor estimate how much water is present. If it’s a light rain that is barely noticeable, the wipers won’t activate.
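
The wiper decision boils down to threshold logic, which can be sketched in a few lines. The thresholds and the `wiper_action` name here are invented for illustration; real sensors use calibrations that manufacturers don’t publish:

```python
def wiper_action(reflectance_drop):
    """Pick a wiper speed from how much the sensor's reflection has dropped
    (water on the glass scatters the beam, so less light comes back).

    reflectance_drop: 0.0 (dry glass) to 1.0 (heavy water film).
    Threshold values are illustrative, not from any real sensor spec.
    """
    if reflectance_drop < 0.05:  # barely noticeable mist: stay off
        return "off"
    if reflectance_drop < 0.30:
        return "intermittent"
    if reflectance_drop < 0.60:
        return "low"
    return "high"
```

A future Snow mode trigger could work the same way, swapping the reflectance reading for a road-surface temperature or traction estimate.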

In a similar way, road sensors could look for environmental conditions on the road, or take readings from city infrastructure or sensors embedded in the roadway. Icy pavement on a country road? A future car would know it should activate Snow mode for the drivetrain. We’d be able to focus even more on driving and know that the car is adjusting itself on the fly.

Hopefully this sensor technology debuts soon.

On The Road is TechRadar's regular look at the futuristic tech in today's hottest cars. John Brandon, a journalist who's been writing about cars for 12 years, puts a new car and its cutting-edge tech through the paces every week. One goal: To find out which new technologies will lead us to fully self-driving cars.

The Brake Coach feature on the 2020 Ford Escape actually works on the road

More details and more specific feedback in cars will change how we drive. When we know what to do, and when we’re provided enough information to make better decisions, we will respond by driving safer, smarter, and more economically.

The problem, of course, is that we sometimes lack enough information. A driver who constantly jams down on the accelerator may not realize quite how much that hurts fuel economy. Racing up to stoplights and braking suddenly is also not the smartest way to drive, but most cars don’t warn you about that.

2020 Ford Escape

In a recent test of the redesigned 2020 Ford Escape SE Hybrid, I really liked a new feature called Brake Coach. Many hybrids provide some feedback about braking and acceleration, offering a real-time graph showing how your driving impacts fuel economy. Brake Coach goes further, showing a real-time percentage of how effectively you are braking.

Here’s how it works. As I drove, the Escape analyzed how long and how hard I was pushing down on the brake. Over time, I learned to press more lightly and hold the pedal longer, which increases the amount of regenerative braking that occurs.

This means the Escape can generate more electric power when you brake gradually; if you race up to a stop sign, you won’t recover as much. Ford has changed the interface since I last saw this indicator: it is now more obvious and on by default, so you can see the results in a big, bold display.
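
A rough model of what a Brake Coach-style score might reward: pedal force that the regenerative system can capture counts toward the score, while force beyond its limit spills over to the friction brakes and is lost. Ford has not published its formula, so `brake_score` and the `regen_limit` value are assumptions:

```python
def brake_score(pedal_trace, regen_limit=0.4):
    """Score a braking event the way a coach-style display might.

    pedal_trace: brake pedal force samples, each 0.0-1.0.
    Force up to regen_limit can be captured by the motor-generator;
    anything harder goes to the friction brakes and is 'lost'.
    Illustrative model only -- not Ford's actual formula.
    """
    if not pedal_trace:
        return 100.0
    captured = sum(min(f, regen_limit) for f in pedal_trace)
    demanded = sum(pedal_trace)
    return round(100.0 * captured / demanded, 1)

# A gentle, drawn-out stop versus a hard last-second stab.
gentle = brake_score([0.2] * 10)  # never exceeds the regen limit
hard = brake_score([0.9] * 3)     # mostly friction braking
```

Under this toy model the gentle stop scores 100% while the hard stab lands well below it, matching the behavior the article describes.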

In practical terms, the car was teaching me how to drive. In several test runs, I noticed how I could brake longer and reach 100% – and, in a few cases, I didn’t quite brake properly and only received a return of about 80% or less. The fact that this all occurs in real-time without having to inspect a complicated display is helpful. You brake, and the Escape tells you how you did.

In previous tests, I recall seeing a gauge with a circle and a percentage -- it was more colorful and cleaner in some ways, but I prefer the simple percentage readout. That’s the goal anyway -- to learn how to brake a hybrid more effectively.

What's around the bend 

I can imagine what future cars will do. As we start connecting to more cars next to us on the road, we might see indicators for how well we have passed or merged, or whether we are driving safely around other vehicles. If we make a wide turn at an intersection, a future car might read sensors on the side of the road and on signs to show how close we came to them.

Today, this isn't really possible because GPS is not accurate enough and the sensors aren't available (although stoplights in places like Las Vegas do communicate with cars). I’m envisioning a day when there are sensors everywhere, and the opportunity for feedback like the Brake Coach actually helps us to drive safer and better.

We will have to be careful about how much information we actually show the driver, of course. And, the driver should have the chance to tweak all of the settings and pick the coaching features that make the most sense.

This will prepare us for a future when we let cars drive on their own, and all of this 'coaching' is actually more suited for the AI in a car and works as a guidance and control system. For now, I just like how Brake Coaching helped me squeeze out a little more fuel economy.


What is AWS Outposts?

AWS Outposts, which debuted in 2019, brings native AWS infrastructure into your own facility. In 2020, Amazon will also offer a VMware variant that runs in the same hybrid model.

For those who demand low-latency performance for massive data projects, there is nothing quite like a hybrid cloud computing environment. This “best of both worlds” approach means you can run low-latency apps in your own data center or on local servers (or in a co-location center), and yet the on-premise IT infrastructure connects seamlessly to the cloud. This allows you to scale easily and take advantage of a flexible cost structure in the cloud. Yet, you also use local resources for those projects that have the highest compute demands.

AWS Outposts is an ideal product for those with low-latency compute needs. It might be an enterprise application for credit card processing for thousands of customers, or it might be a university research project trying to find new chemical compositions.

The product is a replica of what runs in a remote Amazon data center. It consists of the hardware infrastructure, all of the services, the API (Application Programming Interface), management tools, and operations of AWS with the same console.

AWS Outposts is a hardware rack in an enclosure -- a physical product that you deploy at your own site. Yet, it is supported fully by Amazon in that the company installs, operates, and supports the hardware rack with its own team of engineers.

If you picture the hardware and software that runs in the cloud in an Amazon facility, then Outposts takes all of this same infrastructure and makes it available at your facility. It also connects you to the cloud for additional online storage, performance, and services. Perhaps most importantly, AWS Outposts connects you to the closest remote Amazon facility in your area to ensure the best throughput, compute power, networking, and storage in the cloud.

One key aspect of how this works: companies that cannot afford any interruption in service -- with major research projects, internal applications, or a legacy customer management system running in real-time -- can still scale up to the cloud. They can continue using on-premise infrastructure, but as a project grows, they can move some of it to the cloud -- or eventually move fully to the cloud.

One example of a company that might use Outposts: a major retailer that already runs many of its own internal applications might deploy Outposts as a way to continue running low-latency applications that process transactions. It can then scale up through the cloud computing connection as it adds more products and more stores.

Benefits of using AWS Outposts

This leads to an important overall advantage: streamlined IT operations for the entire company. One portion of your team no longer has to manage and support the on-premise infrastructure -- installing patches and managing security -- while another team manages the cloud environment and all that it entails. Instead, with one console managing both, and exactly the same services running locally and in the cloud, your IT staff can focus on the actual projects. The business objectives of the applications and data become paramount.

To understand the benefits of running AWS Outposts, it’s important to first cover a few of the AWS services that are available. This is one of the most important benefits -- that you can run AWS services that normally are available in the cloud as local services.

For example, Outposts lets you run Amazon EC2 (Elastic Compute Cloud) locally. This means you can run virtual servers inside your own facility as though they were in the cloud, expanding and reducing capacity as needed for your applications and data. You can also run Amazon Elastic Block Store (or EBS, Amazon’s block storage service).
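
The hybrid placement idea behind Outposts can be sketched as a simple decision: latency-sensitive work stays on the local rack when it has room, and everything else bursts out to the AWS region. This is a conceptual illustration with made-up round-trip times, not an AWS API:

```python
def place_workload(latency_budget_ms, outpost_has_capacity,
                   region_rtt_ms=40.0):
    """Decide where to run a workload in an Outposts-style hybrid setup.

    A workload whose latency budget can't absorb a round trip to the
    region must stay on the local rack; anything else (or overflow when
    the rack is full) scales out to the cloud. The 40 ms region round
    trip is an illustrative number, not a measured value.
    """
    needs_local = latency_budget_ms < region_rtt_ms
    if needs_local and outpost_has_capacity:
        return "outpost"
    return "region"

# A 10 ms transaction-processing call must stay on premises;
# a batch job with a 500 ms budget can burst to the cloud.
local_choice = place_workload(10, outpost_has_capacity=True)
burst_choice = place_workload(500, outpost_has_capacity=True)
```

The point of Outposts is that both branches of this decision use the same services and console, so the placement choice doesn’t change how the workload is managed.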

This year, Amazon will also offer Amazon S3 (Simple Storage Service), the primary cloud computing storage service that powers many web applications and websites, for AWS Outposts. Once you deploy the services you want to run locally at your own site, you can extend those services to the cloud -- using the same console interface and the same Amazon services.

What this all means is that companies have a unique infrastructure that runs locally and in the cloud, but all of the services are the same and use the same interface. This simplifies the management and maintenance required for your entire computing infrastructure.

The alternative to this is, quite frankly, a bit of a nightmare. Hybrid products usually come with a completely different set of management tools to install, hardware to manage, and interfaces that are designed to help you connect to the cloud but often end in frustration. Because some hybrid on-premise products are not designed or developed by the same company as the cloud provider itself, using them is a confusing and complex undertaking.

What is AWS AppSync?

Not all the data an application keeps in cloud storage needs to stay current every minute of the day. Think of a social media app. There is “real-time” data, such as a new post or a photo upload, but most of the data -- account information, the user profile, the place you went to high school -- does not need to update constantly. In a gaming app, there is a massive amount of real-time data, such as your ever-changing location on a map, but your credit card number will likely stay the same month after month. Constantly updating all data for a mobile or web app doesn’t make sense and only consumes unnecessary resources.

AWS AppSync is a way to synchronize the data used in a web or mobile app, allowing developers to choose which data should be synced in real-time.

AppSync relies on GraphQL, which was originally developed by Facebook, for the data syncing. It’s intended to help developers who need to pull data from different sources in the cloud and then perform functions within the app quickly and efficiently. It’s also highly secure: even though an app may be syncing from multiple data sources, and developers choose which portions of the app use real-time data, the data is still protected.
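
To make the real-time-versus-on-demand split concrete, here is what the two kinds of GraphQL operations might look like for the social app example above. The field and operation names are hypothetical, and real AppSync schemas also use directives not shown here:

```python
# A one-shot query for slow-changing profile data, fetched only when the
# app needs it (hypothetical field names):
profile_query = """
query GetProfile($id: ID!) {
  getProfile(id: $id) {
    name
    highSchool
  }
}
"""

# A subscription for the data that must arrive the moment it changes --
# new posts are pushed to connected clients in real time:
new_post_subscription = """
subscription OnNewPost {
  onNewPost {
    id
    author
    body
  }
}
"""
```

The developer’s choice of which fields live behind a query and which behind a subscription is exactly the real-time/not-real-time decision the article describes.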

As mentioned, the application development service is intended for those who need to deal with massive amounts of real-time data and have that data sync to the application. Yet they also need the ability to decide which data does not need to sync in real-time. Developers can create complex queries that use a cloud database and aggregate the data or make complex decisions to analyze, process, or manipulate it from multiple sources.

The advantage here is that you can easily scale an application and use multiple Amazon services for your application, without being restricted by your IT infrastructure or where the data resides (and if you need to process all data in real-time).

Another advantage is that this works with data that is offline for periods of time. In a gaming app, for example, the developer can sync real-time data but also handle what happens when the end-user keeps playing and racks up a high score while no longer connected to the Internet. AppSync can then sync the offline data once the user reconnects, without having to sync the entire data set. This reduces bandwidth requirements and speeds up data syncing for the web or mobile application.
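
The offline catch-up idea can be sketched as a delta sync: the client presents the last change it saw, and only newer events travel over the wire. This is a simplified illustration of the concept, not the AppSync wire protocol:

```python
def delta_sync(server_events, last_sync_token):
    """Return only the events a reconnecting client has not seen yet.

    server_events: list of (token, payload) pairs, ordered by token.
    Rather than re-sending the whole data set when a device comes back
    online, only changes after last_sync_token cross the network.
    """
    return [payload for token, payload in server_events
            if token > last_sync_token]

events = [(1, "score:120"), (2, "score:450"), (3, "score:900")]
missed = delta_sync(events, last_sync_token=1)  # device went offline after event 1
```

A client that was only briefly offline downloads two small events instead of its entire state, which is where the bandwidth savings come from.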

Examples of using AWS AppSync

One example of using AWS AppSync is with a Big Data project. Often, with a research project at a large university, for example, the data sources are widely distributed. For a project analyzing new road construction, there might be data available related to material research in Zurich and environmental data from a lab in Munich, but the app development team is based in Chicago.

In the past, syncing all of this data for an app -- and deciding which data is mission-critical and must be real-time in nature, and which can be stored long-term and not synced -- was quite an undertaking. It often required a combination of multiple cloud services and a way to sync all of the data sources manually. AWS AppSync instead provides one console so that developers can understand their API and what is happening with their data.

Another example of AWS AppSync in practical use is when developers are creating a smart home app, one that monitors home security and safety issues.

Sensors might be installed to detect water leaks, look for intruders, and monitor whether a window has opened suddenly in the middle of the night. The Internet of Things (or IoT) is a concept that has allowed developers to create rich applications that unify these disparate sensors to present a clear picture of what is happening in the home.

As you can imagine, pulling and monitoring this sensor data is a Herculean task. There might be thousands or even millions of data requests from an app -- e.g., every time someone opens a door or when a sensor detects a moving object. In a connected home app, some of the data can be at rest and won’t need to sync. With AWS AppSync, a developer can decide how to sync that data and what happens to it in real-time within the app, not only for the dozens of sensors that might be installed in a smart home but for hundreds or thousands of customers.

In the end, it’s the flexibility this provides that is key for developers creating rich applications that use multiple data sets from wildly varying sources from all over the globe.

What is AWS Storage Gateway?

For a company that’s been around for some time or has relied on legacy systems, Storage Gateway is a godsend, because the “old way” involves constant management and even additional staff members. With cloud services, those staff members become more available for online storage planning, strategy, and other tasks while the storage infrastructure operates more efficiently. The move can also be gradual and coordinated, so that you adopt the cloud at your own pace.

Moving data to the cloud is not quite as simple as flipping a switch. For companies that have managed their own data centers or server rooms for decades, there are a few steps to consider -- and it’s not always wise to pull the plug on an internal infrastructure quite so quickly. If a startup uses on-premise business servers and then experiences unexpected growth, abandoning those servers doesn’t make sense (even if the long-term plan is to do exactly that).

AWS Storage Gateway is a way to bridge this gap for companies of any size. It’s a hybrid storage option that connects on-premise storage -- including age-old tape backup systems -- to the cloud in a way that also provides one console to access all storage configurations.

This is accomplished through a virtual machine or by using a dedicated hardware gateway available from Amazon. Either way, the concept is the same -- AWS Storage Gateway allows companies to continue using on-site storage and backups but opens them up to the world of cloud computing to provide all of the benefits of lower costs, flexibility, less IT management overhead, and scaling up or down easily as a company changes and grows.

The Storage Gateway is available in three different versions. Storage Gateway for Files is perhaps the most common offering for basic file storage needs; it creates a connection between your existing storage infrastructure and AWS products such as Amazon S3 (Simple Storage Service), Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, and Amazon EBS (Elastic Block Store). You might use it to perform backups of your local storage, to create a content library that uses both on-premise files and the cloud, or to store the cloud database used by web apps.
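
A file-gateway tiering policy across those storage classes might look like the sketch below. The age thresholds are illustrative policy choices, not AWS defaults:

```python
def storage_class(days_since_last_access):
    """Pick an S3 storage class for a file surfaced through a file gateway.

    Hot data stays in S3 Standard; data untouched for months moves to
    the cheaper Glacier tiers. Thresholds here are invented for
    illustration -- a real lifecycle policy is configured per bucket.
    """
    if days_since_last_access <= 30:
        return "S3 Standard"
    if days_since_last_access <= 365:
        return "S3 Glacier"
    return "S3 Glacier Deep Archive"
```

In practice this kind of rule runs as an S3 lifecycle configuration rather than application code, but the decision it encodes is the same.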

Storage Gateway Virtual Tape Backup to the Cloud is the second offering and, as the name suggests, is designed for companies with a legacy backup system. More common than you might think, tape backup is still in use because companies built their storage libraries around it and need to keep using those same systems. The physical backup systems were likely expensive to install in the first place, and adding an entirely new backup system might create more problems for applications, data, and legacy storage.

A Volume Gateway is the third option available as part of Storage Gateway. With this offering, your on-premise applications attach to iSCSI block storage volumes, using either a virtual machine or a hardware appliance that connects to AWS storage.

Benefits of AWS Storage Gateway

Companies sometimes get “stuck” using legacy storage systems or expensive hardware and software they purchased to manage their infrastructure. It’s a headache because your IT staff has to continually expand the storage, provision the volumes, patch the software, and maintain the entire system even though the infrastructure might be outdated or inefficient.

With Storage Gateway, the main advantage is that you can bridge the gap: you keep using those legacy storage systems while also moving to the cloud.

This means you still retain the on-premise infrastructure and won’t lose that investment, but at the same time you can move data to the cloud and expand and scale easily. It’s the best of both worlds, but also lets you embrace “the new world” of cloud storage.

Because AWS Storage Gateway uses one interface for all storage configuration, it is easier to use than relying on a legacy system -- which might involve multiple system-level interfaces running on multiple servers, backup systems, and storage arrays.

As with all cloud computing services, AWS Storage Gateway provides benefits in terms of scaling to meet demand. Yet, the costs are not fixed in a way that forces you to use certain services or pay for the infrastructure. You can retain the investment you have in local storage but also pay as you go when you expand and use the cloud for storage.

One less-than-obvious advantage to using Storage Gateway is that it is a way to move away from on-premise storage without a sudden change. You can gradually adopt cloud services and migrate your recovery backups, legacy data, and current data storage over to the cloud without a major disruption to service and without having to “start over” with cloud storage.

What will happen when 5G works in trucks like the 2020 Nissan Titan?

The dream of the connected car has been somewhat slow to materialize. Sensors and access points have not been installed in the infrastructure around us fast enough – unless you are lucky enough to live in Las Vegas. Stoplights, stop signs, road barriers – in most areas they are all just as disconnected as they were 30-40 years ago.

In the meantime, cars have become increasingly more connected, just not to each other. Almost every new make and model these days offers some form of connectivity, mostly for entertainment purposes (think watching movies on long trips).

Nissan Titan

And speeds have improved with 4G LTE access, sometimes running at 20Mbps or even faster in some areas. A roving, moving hotspot is a smart innovation that could lead us to more connected services.

It’s smart enough that I was curious about the current state of this connectivity: how fast it runs, and whether it is reliable enough for the coming age of more connectivity (and 5G service).

Easy streaming

I fired up the hotspot in a 2020 Nissan Titan recently, just to find out. Parked at the side of the road in a remote area – and conserving a little usage on my own smartphone – I connected with an Apple iPad and handed another to a passenger in the backseat.

In modern vehicles, 4G LTE service is common – on the Titan, you can expect to connect at speeds fast enough to watch a movie on Netflix. The hotspot is easy to find on the main display, and for those of us who throttle our usage to avoid overage charges, it can be an amazing perk on a long trip or while waiting in the Trader Joe’s parking lot.

Nissan requires that you log in to a portal in your browser to get started, and the costs are fairly reasonable, at least compared to home Internet service in some areas. You pay $20 (about £15, AU$30) per month for 1GB of access; for 5GB of bandwidth per month, it’s $60 (about £45, AU$85).

The main advantage here is that you don’t need a phone in the truck at all, or a hotspot of your own. You don’t have to 'provide' service with your own gadget, so it’s always available for whoever happens to climb inside the truck.

In my tests, the service was reliable and fast even as I was driving around town. There were no issues with the Wi-Fi not working because you are inside a building or because too many people are logged in.

Strong and steady

One thing I noticed about 4G LTE service in cars: it’s reliable enough that we didn’t experience any stuttering or pauses. Maybe the Titan acts as a massive antenna (my guess is that the antenna is built into the roof, but as far as I can tell this isn’t publicly disclosed).

That’s encouraging, because I know what’s coming next.

5G service will be faster, more reliable, more prevalent, and less susceptible to interference. Someday, 5G sensors and access points will be installed on the roadways, at exits, on bridges, and in other cars – forming a massive 5G network infrastructure. The fact that the throughput on 4G LTE is consistent is a good sign that cars will be able to handle this next step.

When will it happen? My educated guess is that it will be sooner rather than later. 5G is making headlines, and it’s probably going to roll out quickly in 2020, so automakers will likely jump on board with new services. It’s also here to stay – it’s an opportunity for cities to move forward with a network standard they know won’t be replaced or superseded anytime soon. We hope.


What is AWS Direct Connect?

Imagine a healthcare company that has legacy data stored on servers in a back-office environment. It’s required that the data be accessible and available, but there’s no need to access it routinely for use in web or mobile apps at the local on-premise facility. By using AWS Direct Connect, the data is still part of the overall cloud computing environment but doesn’t depend on the public internet for bandwidth, reliability, or security needs.

Believe it or not, the internet is not the perfect, all-purpose solution to everything in business. While the rise of cloud computing, mobile apps, the web, and instant-access from anywhere has certainly changed how we do business, there are still times when a network connection is more viable when it is not channeled over the public internet.

One example of this is a company with a mix of needs: it wants high-throughput, low-latency data transfer in the cloud, with the scalability and flexibility that brings, but it also runs an on-premise data center or co-location facility where data sits at rest and doesn’t need routine access from business applications. That data doesn’t even need to be web-accessible or on the internet at all. What’s required is a way to make the network infrastructure work for all of your business and data needs.

Enter AWS Direct Connect. This network service is like a bridge between the mission-critical activities for a business (say, a patient database in a hospital or a transactional system for an e-commerce website) and those that are not urgent and do not require a cloud storage solution. It’s a way to benefit from both worlds and to connect them to form one infrastructure.

Direct Connect, as the name implies, is a way to connect on-premise, co-location, or a back-office infrastructure to a cloud computing environment like Amazon S3 (Simple Storage Service) or EC2 (Elastic Compute Cloud). It’s a nod to the fact that not every conceivable application or data store benefits from the cloud, and that companies still have legacy applications. Yet, it brings those legacy apps into the cloud and makes them accessible.

One of the best examples of where AWS Direct Connect might fit a company’s needs is a project that is not mission-critical. The data might be stored on servers at your headquarters but does not need to be immediately accessible; in fact, you may not need to access the data stores over the public internet at all. Direct Connect creates a dedicated network connection between this private data store at your location and AWS in the cloud, without touching the internet. This can dramatically reduce bandwidth costs because you are not relying on an internet provider; it also reduces congestion, dependence on the internet, and the likelihood of internet bottlenecks.

Benefits of AWS Direct Connect

Interestingly, another benefit has to do with scaling your IT infrastructure management. While you may have fixed data center locations or back-office servers, you can still adjust the bandwidth for Direct Connect from 1 Gbps up to 10 Gbps, and you can still configure your infrastructure to meet the changing needs of your apps and data. You can add more virtual interfaces, provision more AWS resources, and make decisions about legacy data or data-at-rest to increase availability or throughput as your business needs change. In short, you still benefit from a cloud computing infrastructure and the decisions you can make about scaling the service up or down.
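
Back-of-the-envelope math shows why that provisioned bandwidth matters. The sketch assumes the link is the bottleneck and ignores protocol overhead:

```python
def transfer_hours(dataset_gb, link_gbps):
    """Hours to move a dataset over a provisioned link of a given speed.

    Converts gigabytes to gigabits (x8), divides by link speed in Gbps
    to get seconds, then converts to hours. Ignores protocol overhead
    and assumes the link is the only bottleneck.
    """
    gigabits = dataset_gb * 8
    seconds = gigabits / link_gbps
    return seconds / 3600

# Moving a 10 TB archive: roughly 22 hours at 1 Gbps,
# just over 2 hours at 10 Gbps.
slow = transfer_hours(10_000, 1)
fast = transfer_hours(10_000, 10)
```

Being able to dial the Direct Connect bandwidth up for a migration window and back down afterward is the scaling decision the paragraph above describes.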

One of the most important benefits to using AWS Direct Connect is related to the costs. Legacy apps, data that is not mission-critical, or even projects, apps, and databases that simply don’t need high-performance throughput or scalability are sometimes still moved to a cloud infrastructure because there is no way to maintain the on-premise servers. This ends up costing more. With Direct Connect, there is a lower cost for the network access from on-premise to Amazon services like S3 or EC2, avoiding the internet altogether.

This means lower costs because you do not have to pay for internet access or any overage charges and because Direct Connect provides a lower cost structure. As you can imagine, the “direct” connection between on-premise and the cloud also reduces the possibility for congestion, network interruptions or faults, and interference on the public internet.

Direct Connect is easy to use. It runs within the AWS console interface, and there are templates you can use for configuring virtual interfaces to make it all seamless and obvious in terms of which data is being transmitted from which locations to the cloud. With the AWS console, you can view the entire infrastructure from one interface with a global view of operations.

What is AWS CloudFormation?

The sinews and muscles that make cloud computing function are just as important as the web and mobile applications that run on top of it. While many companies are focused on the features available in their apps, increasing user adoption, or the revenue generated from a service that runs on the web, there is also the underlying infrastructure that makes those apps work reliably and at a high-performance level. For the most part, a cloud computing service provider like Amazon (with AWS, or Amazon Web Services) insulates developers, data scientists, and business owners from the complexity of the infrastructure.

Yet, there is also a great opportunity to tweak that cloud infrastructure in ways that help your company, the web and mobile apps you run, and your customers. The concept of “Infrastructure as Code” emerged a few years ago as a way to help companies manage all of the disparate services that run in the cloud. Previously, they may have used scripts or other tools to manage their IT infrastructure, but those tools were often hard to use and complex. The problem is exacerbated further when your staff needs to manage provisioning, version control, and other variables.

While we like to think of cloud infrastructure as running independently of the apps and services we deploy, there are opportunities to provision services so that they all work together seamlessly, and to take advantage of new Amazon services. That means even more control over how the infrastructure runs and what you can do with the apps that run on top of it.

AWS CloudFormation, as the name implies, is a way to “form the cloud” -- meaning, it allows companies to manage and control the application stacks and resources needed for their web and mobile applications. It provides access to the infrastructure components at your disposal and allows you to manage them all from one central place, whether that is the AWS console or the command line.

An example of what you can do: for those who are new to cloud computing, AWS CloudFormation uses templates to make the process easier (essentially, a template is a JSON, or JavaScript Object Notation, file -- YAML is also supported -- that you use to describe and manage resources). With templates, you can define and track all of the AWS resources you need. It takes the guesswork out of the infrastructure management part of cloud computing. Pre-defined templates make this even easier, providing access to the most used resources in a way that is ready to deploy.

Once you have selected a template (either one you have written yourself or a pre-defined one), you then upload that configuration file to CloudFormation. The “infrastructure as code” concept comes into play here because you are using a piece of code -- the template file -- to manage and control all of the resources, including the application stack, storage, servers, memory, and any other variables you need for your applications.
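To make the “infrastructure as code” idea concrete, here is a minimal sketch of a CloudFormation-style template built as a Python dictionary and serialized to the JSON you would upload. The resource name and bucket name are hypothetical, and a real template would usually define many more resources.

```python
import json

# A minimal, hypothetical CloudFormation template sketched as a Python
# dictionary. "ExampleBucket" and the bucket name are illustrative only.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example stack with a single S3 bucket",
    "Resources": {
        "ExampleBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-app-assets"},
        }
    },
}

# Serialize to the JSON file you would upload to CloudFormation.
template_body = json.dumps(template, indent=2)
print(template_body)
```

Because the whole stack is described in one file like this, it can be version-controlled, reviewed, and re-deployed exactly like any other piece of code.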

Benefits of AWS CloudFormation

As you can imagine, using AWS CloudFormation means there is one primary method of controlling the infrastructure rather than a disparate set of parameters and controls. Once you configure the template and upload it, running the infrastructure the way you want it to run is a simple matter of “running that code” in the cloud. The single template or a series of templates you create becomes the one way you manage the AWS infrastructure.

Because of this one command center approach, it is also easier to replicate and deploy another infrastructure for additional application stacks using the same template. This also makes it easier to deploy an infrastructure for testing and development purposes. This provides more flexibility in how you develop and test business apps, and how you stress test and add additional services for your infrastructure without the confusion of having multiple points of configuration.

Because of this flexibility in how you control and manage the infrastructure, the CloudFormation templates have exactly the same benefits as a normal piece of software code. This includes version control for those templates, the ability to author the templates in a programming language just as you would any other app, and also to work together as a team to analyze the application stack, AWS resources, and performance variables as needed.

Another benefit to managing your infrastructure in this way is that you automate the entire process. Once your templates are all configured and ready to deploy, and your team has worked together to tweak all of the settings, deploying the template is incredibly easy -- it is just a matter of uploading that template and deploying it within CloudFormation.

One additional benefit, as is usually the case with any cloud infrastructure process, is that you can scale up easily with increased demand or when you need to deploy more apps to a larger group of users. You can replicate the templates in CloudFormation and launch an entirely new infrastructure with new applications without reinventing the wheel.

What is Amazon S3?

Amazon S3 is the foundational product in the Amazon Web Services lineup. Well-known and well-regarded, it is the bedrock for most of the Amazon cloud computing services available. Without it, many web applications we use would not function, including Amazon.com itself.

To understand what S3 is and what it does, it’s important to start at the beginning and define the concept of object storage. Unlike the files stored on your own laptop, which use a hierarchical block storage system invented decades ago, Amazon Simple Storage Service (or S3) uses object storage, which stores each piece of data as an independent object along with its related metadata and a unique object identifier. With object storage, there are not the same limitations in terms of reliability, speed, storage location, or flexibility as traditional file storage.
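The object model described above can be sketched in a few lines of code. This is not the real S3 API -- just an illustration, with made-up keys and metadata, of how an object bundles data, metadata, and an identifier in a flat namespace rather than a folder hierarchy.

```python
# Conceptual sketch of object storage: each object bundles data, metadata,
# and a unique key. Illustrative only -- not the actual S3 interface.
class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat namespace: key -> object, no hierarchy

    def put(self, key, data, metadata=None):
        """Store the data plus its metadata under a single object key."""
        self._objects[key] = {"data": data, "metadata": metadata or {}}

    def get(self, key):
        """Retrieve the whole object -- data and metadata together."""
        return self._objects[key]

store = ObjectStore()
store.put("reports/2020-q1.csv", b"revenue,region\n",
          {"content-type": "text/csv"})
obj = store.get("reports/2020-q1.csv")
```

Note that the key looks like a file path, but to the store it is just an opaque identifier -- which is part of what frees object storage from the limitations of hierarchical file systems.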

With S3, cloud storage is not only possible but provides a high level of functionality: Big Data analysis using a data lake, incredibly complex social media apps like Facebook, vast data warehousing initiatives at research institutions, data modeling at companies like Ford to develop materials used in cars, mobile apps that run on a wide variety of devices for millions of users, and e-commerce platforms like Amazon.com.

Introduced in 2006, Amazon S3 made waves right away because of how flexible it is for a wide range of industries. The health care sector can trust that it is secure and reliable and meets compliance regulations for storing electronic health records. Big Data companies can rely on the fact that it is robust enough to handle massive quantities of data at petabyte scale. Automakers can rely on S3 because it has the performance required in a highly competitive market to manage and track data of all forms. Universities and even government entities using S3 don’t have to worry about uptime or security issues. Large corporations know that S3 is capable of scaling to meet user demand, which can fluctuate widely based on user trends and market needs.

In short, S3 became one of the founding technologies for the Internet, e-commerce, web applications, social media, Big Data analytics, data discovery, mobile apps on your smartphone, Internet of Things devices in the connected home, business dashboards, and just about any other tech trend you can think of -- and a few you might not even know about.

S3 is also reliable. Amazon states that the service provides “the eleven 9s” (or 99.999999999%) of durability for stored objects, which meets the demands of any mission-critical application. It’s no wonder that Netflix, Airbnb, Pinterest, Dropbox, Reddit, and thousands of other services use S3 and would potentially not even exist if it wasn’t for the cloud storage product.

Benefits of Amazon S3

In some ways, Amazon S3 is such an important product for cloud computing that it can help you understand the benefits of the cloud itself, not just this Amazon product. It all starts with scalability. Any user can sign up for S3 for free and start using object storage for a simple in-house app you build over lunch. The console is not designed only for developers and computer scientists, and the service is not meant only for massive companies to process massive amounts of data. Even startups can sign up and use Amazon S3, gaining access to a free tier that provides 5GB of storage, 20,000 “get” requests, and 2,000 “put” requests without having to build a data center. (The “put” and “get” terms have to do with how object storage works -- they refer to the data upload and download functions.)

In fact, while S3 can work with an existing data center, it doesn’t require that you have any IT infrastructure in place -- no servers, no memory, no storage allocations. Amazon S3 works within the Amazon ecosystem with services like Amazon Glacier (for long-term back-ups) or Amazon CloudFront (for distributing content securely).

For that startup with one app using the base level S3 object storage allocation, it is possible to then scale up with S3 to handle massive growth -- even becoming the next Facebook or Instagram app. As with cloud computing in general, there are no limitations in terms of lacking the needed infrastructure, performance capability, or security requirements.

As your needs change, and as your data needs evolve, you don’t need to suddenly build a data center and learn about server configuration and storage arrays. Related to this are the benefits of usability, cost structures, security, networking, performance -- in short, S3 matches up with all of the typical cloud computing benefits to handle growth in users and your own needs.

As your business changes, S3 keeps pace without requiring that your staff build out a storage infrastructure, add more servers and storage, or even deal with any of the typical IT-related complexities that come with a complex, technology-enabled project.

What is Amazon Corretto?

Limitations. That’s sometimes a word that comes into play when developers start building applications. They discover the limitations of the development engine or the IT infrastructure needed to support an app. They discover limitations in how they can distribute the app and who can use it on which devices. Or they find out that there are limitations related to costs or the support available for the development environment they use. It can lead to frustration and, for many companies, a serious problem when it comes to scaling the app for more users.

Amazon Corretto is a production-ready distribution of OpenJDK, and it’s designed to provide everything you need to create and run Java applications without limits. Open Java Development Kit, or OpenJDK, is a popular open-source implementation of Java used by developers to create rich applications, and Corretto is Amazon’s distribution of OpenJDK. It’s compatible with the Java SE (Java Platform, Standard Edition) standard.

The concept of Amazon Corretto was born out of an internal need at Amazon. The company runs thousands of apps, both internally and externally for customers, and they needed to create a reliable distribution for their own developers to use, one that is secure and has standards in terms of documentation and process, and one that is used for all of their apps.

First released to the public in 2018, the distribution is intended for companies with similar goals -- to create Java applications that can scale and run reliably, use the latest security features, and can be fixed in a uniform way within a well-known and commonly used distribution.

An example of this might be a new business app for tracking company expenses. Normally, this might require that a company look into open source distributions that are not officially supported as a way to cut down costs, and then figure out all of the necessary support variables such as patching, endpoint security, and bug fixes on their own. This becomes a two-pronged effort -- to create the actual app itself and then to figure out how to update and maintain the app.

One of the benefits of Java is that it is remarkably flexible in terms of the apps you can create. It can also run on Linux, Windows, and Mac platforms. This, however, could also be a downside if it leads to an open-source nightmare of support and disconnected, unreliable apps.

Without Corretto, development teams can descend into chaos -- not for a lack of internal process, but instead related to how apps are supported, documented, and distributed. In fact, the distribution allows companies to focus efforts on the actual development and not on the framework they are using and whether it works reliably and is well supported.

The distribution will run on Linux, Windows, or Mac and is available as a free download. The distribution can run using cloud services, on-premise, or on a local machine. That flexibility in how you run the distribution means teams can focus more on actually creating apps.

Benefits of Amazon Corretto

A key benefit to Amazon Corretto is that, while it is entirely free to use, it is still supported by Amazon in the sense that the company will continue to provide patches, security updates, and bug fixes for a predetermined time period. For example, with Corretto 8, Amazon has guaranteed they will provide support through 2023 for all OpenJDK apps. The latest version available for download, Amazon Corretto 11, is supported through 2024.

This support includes all of the security enhancements that will help you reduce the likelihood of a data breach, which is a concern given that web apps can be an attack vector for hackers. Amazon also provides all of the documentation you might expect from a commercial distribution, all of the patches related to performance and reliability, and all of the bug fixes.

This provides a major advantage in terms of performance. Often, with a web application that is not well-supported, there are issues with web apps breaking down due to code inconsistencies or a lack of patches or security fixes. Amazon makes sure the distribution runs at an optimal performance level (which is important not just for customers but for Amazon’s internal apps).

Underlying the flexibility in how you use the distribution, where you use it, and on which devices and platforms is the fact that Amazon Corretto has no related costs. Anyone can download the distribution from aws.amazon.com/corretto and get started creating apps right away.

The distribution is constantly being tested and updated, thanks to the fact that Amazon uses it for thousands of their own apps and therefore depends on the distribution. This includes apps they offer to external customers and to the internal apps they use for employees.

Unlike some open source distributions, Amazon Corretto provides a platform to create reliable apps with full support and documentation so you can focus on the app itself.

The 4×4 dashboard display on the 2020 Nissan Rogue shows how automation works

Being able to see what a car is doing in real time is a goldmine in the age of automation. It’s one thing to know there's intelligent behavior and AI helping us drive, but it’s even more valuable to see how the technology is working and what it’s actually doing.

One example of this is the 4x4 display that appears in the 2020 Nissan Rogue (a sporty crossover). It’s not fancy in terms of innovative technology, but it reveals the magic behind the curtain. As you drive, the vehicle monitors each tire in real-time and can apply more power as needed as the vehicle senses any problems, such as careening on a slick road.

The science of slippage

I tested the AWD tech in the Rogue on a snowy day and drove through a few snowbanks to see how the computer-controlled tire slippage monitors actually work. It’s interesting to see what is actually happening in terms of deploying more power to the front and rear tires as you drive. I haven't seen that in other cars.

It turns out most of the power is always in the front, but during those tests, I watched as the Rogue applied more power to the rear and tried to evenly distribute the power when I would get stuck slightly in the snow.

2020 Nissan Rogue

A word to the wise if you ever try this: I wasn’t testing a huge snowbank or deep snow. AWD vehicles are meant to handle the precarious conditions of normal winter driving, but they are not actual 4x4 vehicles that can power through a huge snow pile with ease. I prefer the AWD tech myself – I mainly want to arrive at my destination and not get stuck or slip on the road.

In one case, I noticed the car started to fishtail slightly, and the rear wheels kicked in a bit more than expected. It was cool to watch this all in real-time because I’ve tested many other AWD cars, including those that use computer-controlled slip detection, and felt the effect as I was driving, but I’ve never seen an interface that showed me what was happening.

Spectator sport

This is the future of driving because we all want to know more about what an AI is doing and how an autonomous car is making adjustments.

These days it may seem like magic, but of course, it is really just algorithms and programming code doing the thinking for us. A developer is always behind the logic when it comes to braking or swerving, but when you experience it in the car, it feels like someone has taken over – someone that seems like a robot.

My sense is that drivers will want to know more and see feedback like the 4x4 display in the Rogue because they won’t be driving as much. They will want to see an interface that shows real-time adjustments, suggestions, and maneuvers as they occur.

The 4x4 mode is a step in the right direction – now we just need even more tools that allow us to monitor what the bots and automations in cars are doing.

2020 Nissan Rogue

On The Road is TechRadar's regular look at the futuristic tech in today's hottest cars. John Brandon, a journalist who's been writing about cars for 12 years, puts a new car and its cutting-edge tech through the paces every week. One goal: To find out which new technologies will lead us to fully self-driving cars.

What is Amazon SNS?

Not all “messages” sent by a web or mobile app are meant for end-users. In fact, as you use a social media app like Twitter to schedule an upcoming post, or check your feed on Facebook, there might be thousands of “micro” messages sent to and from the app that are not even readable by humans (or intended for them). Think of a gaming app that must record the high-score for a multiplayer game or track a micro-transaction such as buying a new item in Fortnite. These messages are important -- they might track the progress of a gamer or make sure a patient's health record is up-to-date in a hospital. Often, they are sent from one mobile app to another, or from a web application to a mobile app (and vice versa).

Tracking all of these messages, including the ones that actually are sent to the end-user and show up in the end-user interface, is quite an undertaking. There are thousands or perhaps millions of messages and they all require safe and efficient delivery, a network capable of transmitting them securely, and the supporting servers, storage, and compute resources.

That’s what makes Amazon SNS (Simple Notification Service) so valuable to companies with Big Data needs (or even those with “little data” needs). Known as a “pub/sub” (which stands for publish/subscribe) service that runs in the cloud, SNS is an “always-on” service that transmits messages securely within applications. Users subscribe to a topic or developers configure the topics for transmission, and SNS handles the message broadcasting.
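The publish/subscribe pattern itself can be shown in a small sketch: subscribers register interest in a named topic, and every message published to that topic is fanned out to all of them. This is only an illustration of the pattern, not the SNS API; the topic name and messages are made up.

```python
# Minimal publish/subscribe sketch of what a service like SNS does.
# Illustrative only -- real SNS delivers over HTTP, email, SMS, etc.
class TopicBroker:
    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on this topic."""
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        """Fan the message out to every subscriber of the topic."""
        for deliver in self._subscribers.get(topic, []):
            deliver(message)

broker = TopicBroker()
received = []
broker.subscribe("order-events", received.append)
broker.subscribe("order-events", lambda m: received.append(m.upper()))
broker.publish("order-events", "ticket purchased")
```

The key design point is that the publisher never knows who the subscribers are -- it just sends to the topic, which is what lets a service like SNS scale message delivery independently of the apps producing the messages.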

Imagine an app that handles the ticketing required for a major event. The developer has to think about the end-user interface and the features and has to make sure it is easy to understand and use. On the back-end, the app has to manage all of the event details such as seat locations, prices, times, and dates. SNS helps by managing and transmitting these messages and tracks all of the details and the changes that occur for hundreds of thousands of user accounts.

Another easy-to-understand example of this is the “one-time password” (or OTP) an app may require for secure account authentication. Amazon.com uses an OTP to keep your account secure; it’s required to log in and adds a second level of protection. The end-user receives a code they have to type in, and the same mechanism can be used to authenticate a web app, desktop app, or mobile app for smartphones.
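A minimal sketch of that OTP flow, under the assumption that the generated code would be delivered to the user by a messaging service like SNS (the function names here are hypothetical, not part of any AWS SDK):

```python
import hmac
import secrets

# Hypothetical one-time password flow: generate a short-lived code,
# send it to the user (e.g. as a text message), then verify what they type.
def generate_otp(digits=6):
    # secrets provides cryptographically strong randomness
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

def verify_otp(expected, submitted):
    # constant-time comparison avoids leaking information via timing
    return hmac.compare_digest(expected, submitted)

code = generate_otp()
```

In a real deployment the code would also carry an expiry time and an attempt limit; this sketch only shows the generate-and-verify core.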

SNS is the mechanism for sending, managing, and tracking these messages, even when they are transmitted between apps and not viewable by the end-user. SNS can also transmit other micro-messages such as text messages, emails, and mobile push messages used to notify users of news updates, security issues, and other concerns. The actual usage scenario for how the messages will be transmitted is not as important as the fact that the infrastructure supports a wide variety of messaging types and devices. 

Benefits of Amazon SNS

That’s the most important benefit of Amazon SNS -- the flexibility in how you use it. First-time users can sign up for the service in the AWS Console (or using the AWS SDK or the command-line interface), and from there the options are almost limitless in terms of configuring the messages, how they are transmitted, and for which purposes.

In cloud computing, the word “durable” is sometimes used to describe a service like Amazon SNS. In this context, it means the service is hardened and reliable -- there is little concern about whether the messages will actually reach their intended apps. In the scenario mentioned previously with a gaming app, if the messages need to be sent between apps to track high-scores or items, a developer can trust that Amazon SNS will deliver the required messages on an IT infrastructure that is maintained, optimized, and scaled independently. Developers do not need to become experts in managing a messaging server and all of the related technology dependencies, such as compute performance, storage, and endpoint security.

Related to all of this is the ability to easily scale the messaging platform, which is the bane of many developers who build a new app and then suddenly feel the pain of growth in users, services, and features. It usually causes serious problems with reliability and unpredictable costs related to improving the infrastructure to keep up with the messaging levels.

Amazon SNS removes this as a bottleneck for growth. The service means that you don’t need to do any of the normal planning, provisioning, network monitoring, and patching for a messaging service. And, it means all of the scaling occurs automatically as it happens, even for bursts in user growth (or sudden downturns) and messaging traffic related to sudden growth.

In the end, developers can focus on creating the application itself and the content of the messages, without concern for the actual messaging platform and its reliability.

What is AWS Data Pipeline?

Applications rely on a treasure trove of data that is constantly on the move -- known as a data pipeline. While there may be a vast amount of data, the concept is simple: an app uses data housed in one repository and needs to access it from a different repository, or the app uses one Amazon service and needs to use a different one. It might be due to changing business requirements or because you need to use a different database entirely. It might be due to a new reporting need or a change in the security requirements. This data pipeline can involve several steps -- such as an ETL (extract, transform, load) process to prep the data or changes in the infrastructure required for the database -- but the goal is the same: moving the data without any interruptions in workflows and without errors or bottlenecks along the way.
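To picture what one ETL step in such a pipeline does, here is a toy extract-transform-load run. The records, field names, and the "paying subscribers" rule are all made up for illustration; a real pipeline would read from and write to actual repositories.

```python
# A toy extract-transform-load (ETL) run, the kind of step a data
# pipeline automates. All data and field names here are invented.
source = [
    {"user": "alice", "plan": "pro", "monthly_usd": "29"},
    {"user": "bob", "plan": "free", "monthly_usd": "0"},
]

def extract(rows):
    return list(rows)  # in real life: read from a database or S3

def transform(rows):
    # normalize types and keep only paying subscribers
    return [
        {"user": r["user"], "monthly_usd": int(r["monthly_usd"])}
        for r in rows
        if r["plan"] != "free"
    ]

def load(rows, destination):
    destination.extend(rows)  # in real life: write to the new repository

warehouse = []
load(transform(extract(source)), warehouse)
```

A pipeline service earns its keep by scheduling runs like this, retrying on failure, and reporting errors -- the transformation logic itself stays this simple.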

Fortunately, Amazon offers AWS Data Pipeline to make the data transformation process much smoother. The service helps you deal with the complexities that do arise, especially in how the infrastructure might be different when you change repositories, but also in how that data is accessed and used in the new location. An example of this might be a specific executive summary, needed at a certain time of day, that provides details about transactional data for an app that handles user subscriptions. Moving the data is one thing; making sure the new infrastructure supports the reporting you need is another.

Essentially, AWS Data Pipeline is a way to automate the movement and transformation of data to make the workflows reliable and consistent, regardless of infrastructure or data repository changes. The service handles all of the data orchestration based on how you define the workflows and is not limited by how or where you store the data. The tool helps you manage and automate the data dependencies, and it handles the data pipeline scheduling you need to make sure an app, business dashboard, or report works as expected. The service also informs you about any faults or errors as they occur.

It won’t matter which compute and storage resources you use, and it won’t matter if you have a combination of cloud services and on-premise infrastructure. AWS Data Pipeline is designed to keep the process of data transformation straightforward, without making it more complicated due to how you have the infrastructure and the repositories defined.

Benefits of AWS Data Pipeline

As mentioned earlier, many of the benefits of using AWS Data Pipeline have to do with how it is not dependent on the infrastructure, where the data is located in a repository, or even which AWS service you are using (such as Amazon S3 or Amazon Redshift). You can still move the data, integrate it with other services, process the data as needed for reporting activities and for your applications, and perform other data transmission duties.

All of these activities are conducted within an AWS console that uses a drag-and-drop interface. This means even non-programmers can see how the data flows will operate and how to adjust them within AWS without having to know about the back-end infrastructure and how it all works. An example of this is when data needs to be accessed within an S3 repository -- in the console, the only change to make is the name of the repository within S3. The end-user doesn’t need to adjust the infrastructure or accommodate the data pipeline in any other way.

AWS Data Pipeline also relies on templates to automate the process, which also helps any end-user adjust which data is accessed and from where. Because of this simple, visual interface, a business can meet the needs of users, executives, and stakeholders without having to constantly manage the infrastructure and adjust the repositories. It speeds up the decision-making for a business that needs to make quick, on-the-fly adjustments to how they process data and the new reporting, summaries, dashboards, and data requirements.

A monthly subscription fee for AWS Data Pipeline makes the service more predictable in terms of the expected costs, and companies can easily sign up for the free base level subscription to see how it all works using actual data repositories. And, because the service is not dependent on a set infrastructure in order to help you move and process data, you can pick and choose which services you need, such as AWS EMR (Amazon Elastic MapReduce), Amazon S3, Amazon EC2, Amazon Redshift, or even a custom on-premise database.

Related to all of this (the simple interface, low cost and flexibility) is an underlying benefit of automated scaling. Companies can run only a few data transformation jobs or thousands, but the service can accommodate any requirements and scale up or down as needed.

What is Amazon DynamoDB?

A database is the heart of any application. It’s where a web application stores all of the user information such as credit cards, phone numbers, and home addresses. It’s what an internal business dashboard uses to track all of the reporting functions that show the health of your firm. It’s how a massive e-commerce website tracks all of the product information such as product name, price, features, and SKUs (stock-keeping units). Without a database, there would be no applications -- on the web, on your phone or tablet, or on a computer.

Fortunately, a cloud database can deliver all of the benefits you might expect such as auto-scaling, high reliability, and fast performance. And, a modern database can benefit from advances in technology that make the database much faster and more efficient.

Amazon DynamoDB is a database that operates in the cloud, but it’s also one that operates more efficiently, faster, and with better security than a traditional on-premise database or even a cloud-based database that lacks the high-performance features.

To understand what it is and how it can benefit your company, it’s important to explain some database terms. Amazon DynamoDB is a key-value database, which is a way of describing how the data is stored. Unlike a traditional relational SQL database that assigns a descriptor to each field, a key-value database stores data in a nonrelational way using keys. This type of database uses something called an “associative array” to store the records.
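A Python dict is itself an associative array, which makes it a convenient way to picture the key-value model: each record lives under a key, and a lookup goes straight to the record without relational joins. The item names and helper functions below are illustrative, not the DynamoDB API.

```python
# Sketch of key-value storage using a plain dict (an associative array).
# Illustrative only -- not the real DynamoDB interface.
products = {}  # key -> item attributes

def put_item(key, attributes):
    """Store an item's attributes under its key."""
    products[key] = attributes

def get_item(key):
    """Fetch an item directly by key; no joins, no table scans."""
    return products.get(key)

put_item("sku-1001", {"name": "Trail Boot", "price": 89.99})
item = get_item("sku-1001")
```

Because every access is a direct key lookup, this model stays fast no matter how many records the table holds -- the property DynamoDB builds on at massive scale.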

Because this design departs from the traditional relational model, it’s known as NoSQL, which stands for “Not Only SQL.” It means the database is meant for high performance, which is enhanced by the fact that the database uses key values.

While this might all sound technical, it’s an important distinction because Amazon DynamoDB is built for speed and performance; it’s intended for those with massive data throughput needs. It’s also a highly efficient way to use a database in an application, especially if the database contains millions of records. To give you an idea of what this means, Amazon DynamoDB can handle 10 trillion requests per day and peaks of 20 million requests per second.

In practical terms, it means there are few business applications that would stress the database engine or cause issues in terms of reliability, uptime, scaling, or performance. That’s why large companies such as Lyft, Toyota, and Capital One use Amazon DynamoDB as their database engine of choice. When there are millions of concurrent users accessing a credit card database at the same time, or many millions of passengers accessing a ridesharing app, the DynamoDB database can not only keep pace but provide nearly instant results.

Benefits of Amazon DynamoDB

When your staff are free to focus on the actual application and not running the database and the supporting IT infrastructure management, it leads to better applications, services, business dashboards -- and better overall company support and service to end-users.

That’s one of the main advantages to using DynamoDB -- it is particularly efficient and fast, which helps companies that need that level of throughput and performance to meet the demands of customers. There’s no concern about “can the database keep up” because the platform can scale up or down as needed to meet dynamically changing user requests.

Amazon has called this “virtually unlimited throughput,” and it means there are no bottlenecks -- in fact, the service is designed to provide single-digit millisecond response times.

As with most Amazon cloud computing services, DynamoDB is designed to operate without direct involvement from your own staff. That means you don’t have to configure or set up the database itself or manage the related infrastructure such as the servers, networks, or online storage, and you don’t have to maintain the database. Your team doesn’t have to think about whether the data is secure and safe from hackers and data breaches -- that responsibility falls on the cloud provider. There are no requirements related to provisioning or patching the database.

The huge benefit here is that your company is free to focus on the application itself, not how the database is managed and maintained. It means you don’t have to become experts in infrastructure management, server provisioning, storage allocations, or any of the related technologies that are typically needed to make sure the data is available to apps.

An important benefit for Amazon DynamoDB specifically is that it is ready for enterprise-grade applications -- the kind that involves millions of users. In the example mentioned previously related to credit card data, as a company scales up and acquires millions of customers, there are no sudden requirements related to archiving and storing the data even as the database grows to petabyte-scale and no need to radically improve endpoint security.

What is Amazon Kinesis?

A constant flow of information in business is more like a firehose than any of us want to imagine. It’s a steady stream, often bursting at the seams, and it doesn’t let up. Tracking and managing that firehose of data can be a Herculean task because you’re always playing catch-up -- analyzing the data after it has been transmitted and generating reports at a later date. The problem, of course, is that “the later date” never comes and sometimes we process reams and reams of data, store it and archive it, without ever really analyzing it.

Amazon Kinesis is a real-time cloud analytics engine that collects, analyzes, and reports on data as it moves through your company, using the power of cloud computing services. You can think of it as a pipe connecting two firehoses -- the data comes in, Amazon Kinesis analyzes it in transit, and then the data keeps flowing. Instead of using an analytics dashboard or generating reports that are only helpful later, you can make business decisions and respond to data flows in real time.
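On the producer side, feeding that pipe is a small amount of code. Here is a hedged sketch using boto3: the stream name “game-events” and the event fields are made up for illustration, and the call that actually pushes the record is commented out so the sketch runs without AWS credentials.

```python
# Hypothetical sketch: shaping an event as a Kinesis PutRecord request.
# Stream name and event fields are assumptions for illustration.
import json

def make_kinesis_record(stream_name, event, partition_key):
    """Build the PutRecord request; Kinesis expects the payload as bytes."""
    return {
        "StreamName": stream_name,
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": partition_key,  # routes related events to one shard
    }

record = make_kinesis_record(
    "game-events",
    {"player": "p42", "action": "level_complete", "ms": 93250},
    "p42",  # keying by player keeps one player's events in order
)

# With credentials configured, a producer pushes it into the stream:
#   import boto3
#   boto3.client("kinesis").put_record(**record)
print(record["PartitionKey"])
```

The partition key is the one real design choice here: events sharing a key land on the same shard, which preserves their ordering relative to each other.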

As an example, think about a vast multiplayer gaming system where thousands or even millions of gamers participate online. Talk about a steady flow of data. There are thousands of data requests covering player movements, rewards collected, and in-game achievements, not to mention the transactional data -- what players purchase in the game, what content they can access, and how they pay for their membership.

For a company running a gaming network like that, it’s easy to see how complex the interactions are between gamers, the network, and the IT infrastructure required to host the game. Amazon Kinesis tracks the interactions at every level as they happen. For example, if there’s a new high score for a multiplayer battle -- say, completing a level in record time or defeating every opponent -- Amazon Kinesis can report on that information within seconds.
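On the consuming side, spotting that record-setting run is ordinary application logic applied to the stream as it arrives. The sketch below is an assumption-laden illustration: the stream name, shard ID, and event shape are invented, and the boto3 polling calls are commented out so it runs without AWS credentials.

```python
# Hypothetical sketch: detecting a record-setting run from a Kinesis stream.
import json

def beats_record(event, best_ms):
    """True if a level-completion event beats the current best time."""
    return event.get("action") == "level_complete" and event["ms"] < best_ms

# A consumer would poll a shard roughly like this (shard ID is an assumption):
#   import boto3
#   kinesis = boto3.client("kinesis")
#   it = kinesis.get_shard_iterator(StreamName="game-events",
#                                   ShardId="shardId-000000000000",
#                                   ShardIteratorType="LATEST")["ShardIterator"]
#   batch = kinesis.get_records(ShardIterator=it, Limit=100)
#   events = [json.loads(r["Data"]) for r in batch["Records"]]

# Stand-in batch so the sketch is self-contained:
events = [{"player": "p42", "action": "level_complete", "ms": 93250}]
new_records = [e for e in events if beats_record(e, best_ms=100000)]
print(len(new_records))
```

The point is the latency: the check happens as each batch of records flows past, so the “new high score” announcement can go out seconds after the run, not in next week’s report.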

For other data throughput scenarios, real-time analysis and reporting are incredibly valuable. This can include connected smart home and Internet of Things devices, medical records for a large hospital chain, data collected from a car and analyzed for machine learning applications such as autonomous driving, or even clickstream data showing exactly which sections of an e-commerce site someone visits and where they click.

Benefits of Amazon Kinesis

One of the key benefits of Amazon Kinesis is that business decision-makers, developers, analysts, and data scientists are all better informed and able to make better (and faster) decisions. The firehose of data is not so overwhelming. The rich analytics and business intelligence are available right away, not just as part of a report or a dashboard that summarizes findings. There’s no need to wait until all of the data is collected and put under a microscope, because the microscope is constantly looking at the data as it flows across the network.

There are quite a few practical benefits to this, starting with the ability to improve your service. In the case of the multiplayer gaming system, there are traditional benefits such as reporting back to users on high scores and achievements, but perhaps even more importantly it means companies can spot problems on the system in real time and deal with them as they occur, not after the problems have caused users to unsubscribe.

Another practical benefit has to do with the speed of decision-making. Because the analytics arrive in real time, companies can make real-time decisions about that data. In the example of the autonomous car, an automaker might run a simulation in a crowded urban area to see how the robotic vehicle performs, and make decisions about GPS fleet tracking, location data, and the sensors that track how the car adjusts its speed.

In a traditional model, where data is summarized later, the automaker might run simulations and then analyze the data after the fact. One reason for this has to do with the scope of the data discovery and reporting needs: some companies just can’t manage and maintain the infrastructure required for real-time data analytics. That’s another major advantage of using Amazon Kinesis -- the service can scale according to your real-time data analysis needs. There’s no need to build out the infrastructure first or to scale up or down for each new project.

This also helps tremendously with costs. Typically, real-time data analytics is complex, difficult to manage, and requires high-end servers, networks, and vast amounts of online storage, but Amazon Kinesis scales with the needs and scope of each project, and the costs track the actual analytics you perform. With traditional batch processing for data analytics, housed and analyzed in an on-premise data center, that’s just not the case.
