Results

AWS, ServiceNow ink 5-year collaboration pact

ServiceNow applications will be available on Amazon Web Services Marketplace as part of a five-year strategic collaboration agreement between the two companies. ServiceNow and AWS will also co-develop AI business applications focused on industries.

The deal, which starts in early 2024, rhymes with AWS' pact with Salesforce. AWS is building out its AWS Marketplace by listing go-to SaaS providers. ServiceNow's applications and co-developed services will be hosted on AWS.

On the generative AI front, ServiceNow will give joint customers the ability to combine its proprietary large language models with others offered via Amazon Bedrock. The two companies said they will focus on use cases in manufacturing, supply chain, call centers and cloud transformation.


Integrations planned by ServiceNow and AWS include:

  • ServiceNow Customer Service Management (CSM) will be integrated with Amazon Connect to enhance workflows for case management. ServiceNow Now Assist and Amazon's AI and machine learning technologies can be used for sentiment analysis, conversation recaps and contextual data.
  • The two companies will create a cloud center of excellence with the ServiceNow platform to identify workloads to move to AWS. ServiceNow's Technology Workflows will recommend operational workflows.
  • AWS and ServiceNow will develop an automotive manufacturing platform for building, maintenance and repair data throughout the vehicle lifecycle. The two companies will combine ServiceNow's workflows and AWS data lakes. This platform will be developed with a "leading automotive manufacturer."
  • ServiceNow said it will integrate the Now Platform with AWS Supply Chain to create a "forward looking" supply chain management set of tools for forecasting, inventory, sustainability, and automation.
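The sentiment analysis and conversation recap use cases above lean on managed AI services, but the underlying idea is easy to sketch. Below is a minimal, self-contained lexicon-based sentiment scorer; the word lists, scoring formula and `recap` helper are illustrative assumptions, not ServiceNow or AWS APIs.

```python
# Toy lexicon-based sentiment scoring for support-case transcripts.
# Illustrative only: a real integration would call a managed AI service
# rather than a hand-rolled word list.

POSITIVE = {"great", "thanks", "resolved", "helpful", "happy"}   # assumed lexicon
NEGATIVE = {"broken", "angry", "frustrated", "outage", "unacceptable"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; negative values suggest an unhappy customer."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def recap(messages):
    """Recap a conversation: message count plus average sentiment."""
    scores = [sentiment_score(m) for m in messages]
    avg = sum(scores) / len(scores) if scores else 0.0
    return {"messages": len(messages), "avg_sentiment": round(avg, 2)}
```

For example, a case with one negative message ("The portal is broken and I am frustrated") and one positive reply ("Thanks, that resolved it!") nets out to a neutral average, flagging a recovered interaction rather than an unresolved one.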

Here's the week in review from the Constellation Research team at re:Invent.


These overlooked AWS re:Invent launches could solve pain points

AWS re:Invent is an overwhelming barrage of features, services and launches that fly by so fast you can miss a lot of things that could drive real business value.

To that end, here are some of the announcements that team Constellation Research thought were interesting, even if they didn't get the attention that Amazon Q, SageMaker, Graviton, Trainium and Inferentia did. Some items were mentioned in passing by AWS CEO Adam Selipsky, while others hit the wires ahead of the lead keynote.

Data sovereignty requirements. Before AWS re:Invent kicked off in earnest, AWS outlined AWS Control Tower, a set of 65 controls to meet data sovereignty requirements, which require enterprises to control where data resides and flows. AWS Control Tower offers a consolidated view of the controls enabled, your compliance status, and controls evidence across your multiple accounts.

Constellation Research analyst Dion Hinchcliffe said:

"This announcement is very significant for large enterprises operating across different countries. Cloud is a real challenge for large multinational organizations. These new controls are vital for them to stay in compliance with data residency and other requirements. The support for multi-account controls is particularly noteworthy."


Zero-ETL integrations. Selipsky quipped that the audience winced in unison when he said ETL (extract, transform and load), but said that in the future zero-ETL will be a reality. ETL is a major enterprise pain point, and while zero-ETL integrations garner yawns rather than headlines, what AWS announced could be promising for enterprises. I'm betting there are enough enterprise buyers who care about zero-ETL to merit a few links about integrations across AWS data stores.

Constellation Research analyst Doug Henschen knows the ETL pain. He said:

"The idea of Zero-ETL is compelling because it promises considerable savings in time, effort and administrative headaches over ETL development work. It promotes low-latency insight while also reducing ETL processing and development costs. The Zero-ETL service will clearly introduce its own costs, but the time and labor savings are compelling. As for the DynamoDB to OpenSearch integration, this will enable data from massive, customer-facing DynamoDB-based transactional deployments to be quickly available to OpenSearch full-text search, fuzzy search, auto-complete, and vector search for machine learning (ML) capabilities. Talking to AWS executives it’s pretty clear a future step might be using the Zero-ETL capability to do reverse ETL from Redshift back into operational databases such as the various flavors of Aurora, RDS and DynamoDB."

    Amazon DataZone. Like ETL, data cataloging isn't a lot of fun either. Anyone in the data trenches knows it's difficult to provide context around organizational data. The process of data cataloging matters, and anything that cuts down on the labor will be welcomed by enterprises.

    Henschen, who penned a report on the importance of data cataloging, added:

    "The use of ML/AI for augmented cataloging is pervasive among metadata management, cataloging and governance platforms, with examples including Alation, Collibra, Microsoft Purview and Google Dataplex. What's novel here is application of GenAI, which is an obvious next step that multiple vendors are either previewing or adding to their roadmaps. Given that everything is in preview, it's hard to say whether anybody has an edge in using GenAI at this point. DataZone is in the early days of its adoption by AWS customers, so anything it can do to remove friction from using the service will help to promote wider adoption." 

    Amazon Q Code Transformation. This announcement was dropped during the keynote but may have been lost in the shuffle. Amazon Q, a generative AI assistant that runs horizontally across AWS' portfolio, can be used to upgrade Java applications quickly. Amazon Q Code Transformation will analyze existing code, generate a transformation plan and complete the upgrade tasks. Given how much enterprises need to update and transform code, Amazon Q Code Transformation is worth a look. In a blog post, AWS said:

    "Previously, developers could spend two to three days upgrading each application. Our internal testing shows that the transformation capability can upgrade an application in minutes compared to the days or weeks typically required for manual upgrades, freeing up time to focus on new business requirements. For example, an internal Amazon team of five people successfully upgraded one thousand production applications from Java 8 to 17 in 2 days. It took, on average, 10 minutes to upgrade applications, and the longest one took less than an hour."

    Amazon launches WorkSpaces Thin Client. This announcement received some press play but felt very retro. Thin clients?!? The economics of thin clients have made sense for a while. Adoption has been another story. WorkSpaces Thin Client will cost $195, be centrally managed and give access to Amazon WorkSpaces, WorkSpaces Web or Amazon AppStream 2.0, which provides wider access to applications. Thin clients solve a pain point and it'll be interesting to see if AWS gets traction.

    Amazon EC2 high memory U7i instances for in-memory databases. These instances are in preview and designed to support large, in-memory databases including SAP HANA, Oracle, and SQL Server. Given that many enterprises are moving to SAP HANA, these instances are worth a look.

     


    HPE sees Q4 strength in AI, edge, high performance computing

    Hewlett Packard Enterprise saw strong intelligent edge, high-performance computing and AI revenue growth in the fourth quarter, but its legacy compute and storage businesses struggled.

    In the fourth quarter, HPE reported earnings of 49 cents a share on revenue of $7.4 billion, down 7% from a year ago. Non-GAAP earnings for the fourth quarter were 52 cents a share. Wall Street was expecting fourth quarter earnings of 50 cents a share on revenue of $7.55 billion.

    For fiscal 2023, HPE delivered earnings of $1.54 a share on revenue of $29.1 billion, up 2% from a year ago. Non-GAAP earnings of $2.15 a share for fiscal 2023 were at the high range of guidance given at HPE's annual analyst meeting in October.

    HPE CEO Antonio Neri said: "As we continue to capitalize on growing market opportunities – particularly as customer interest in AI continues to explode – I am confident in our ability to deliver substantial returns to our shareholders."

    "CFO Jeremy Cox said HPE was seeing "promising indicators of continued demand in the areas of the market we are prioritizing, especially in AI."

    On a conference call, Neri said:

    "Even against an uncertain macroeconomic backdrop, we saw continued though uneven, demand across our HPE portfolio with a significant acceleration in AI orders. Demand in our AI solutions is exploding. We saw a significant uptick in customer demand in recent quarters for accelerated computing infrastructure and services. In Q4, orders for servers that include accelerated processing units or APUs represented 32% of our total server order mix, up more than 250% from the beginning of fiscal year 2023. APUs, which includes GPU-based servers orders across our business, represented 25% of our total server order mix in fiscal year 2023."

    As for the outlook, HPE said first quarter revenue will be between $6.9 billion and $7.3 billion and reiterated fiscal 2024 sales growth of 2% to 4% in constant currency. First quarter non-GAAP earnings will be in the range of 42 cents a share to 50 cents a share.

    Fiscal 2024 non-GAAP earnings will be between $1.82 a share to $2.02 a share.

    HPE also reiterated annual recurring revenue growth of 35% to 45% from fiscal 2022 to fiscal 2026.


    Workday Q3 shows strength, raises outlook

    Workday reported better-than-expected third quarter earnings and raised its outlook for the fiscal year.

    The cloud HR and finance application company reported third quarter earnings of 43 cents a share on revenue of $1.87 billion, up 16.7% from a year ago. Subscription revenue for the quarter was up 18.1% from a year ago. Non-GAAP earnings were $1.53 a share.

    Wall Street was expecting Workday to report third quarter non-GAAP earnings of $1.41 a share on revenue of $1.85 billion.

    Carl Eschenbach, co-CEO of Workday, said the company was seeing momentum from "AI innovation, strength in full platform deals, expanding partner ecosystem, and international growth." Aneel Bhusri, co-CEO of Workday, said the company's strategy to build AI into its core products is resonating with customers.

    Workday's approach to generative AI revolves around building it into its core products instead of going the add-on route that has been popular with vendors. Workday passed 5,000 core HCM customers in the third quarter. Company executives say Workday is playing the long game, aiming to take advantage as enterprises move to consolidate vendors.

    On a conference call with analysts, Eschenbach said:

    "Generative AI is becoming a business imperative. As a trusted partner and a market leader with over 65 million users under contract we can uniquely drive efficiencies and improve the employee experience. What we are doing and not just saying is resonating with our customers. Simply put, our value proposition has never been so relevant and powerful."

    As for the outlook, Workday projected fiscal 2024 subscription revenue of $6.598 billion, up 19% from a year ago. Non-GAAP operating margins will come in at 23.8%, which is higher than expectations.

    Bhusri said:

    "We're infusing generative AI into our platform is through our investment in conversational AI. While we are still in the exploratory phase with this technology. We believe conversational AI will fundamentally change how users interact with Workday. By enabling them to easily surface information they need and interact with data through simple conversation. We're also leveraging generative AI to create a conversational experience for Workday Adaptive Planning customers. The use of conversational text will simplify the process of surfacing key planning insights. Enabling users to make quicker, more strategic decisions about their businesses."

    Eschenbach had some interesting comments on how customers are thinking about AI and factoring it into their evaluations. 

    "At this point, I don't think people are making decisions purely on AI yet. I think it's something that every customer looks at to make sure that they're going to be covered with a new deployment, or a customer knows that Workday has them in a strong place, but they're still looking first and foremost at running their business and moving off of crappy legacy applications into the cloud. And we're unmatched in that category. And then when we add the AI stuff, I think it just checks that AI box.

    But I would say that despite all the hype, it's still the early days of actual large-scale deployments of AI in HR and finance. We're ready."



    AWS launches Amazon Q, makes its case to be your generative AI stack

    Amazon Web Services made the case at re:Invent that it should be your complete AI stack with Amazon Q, a horizontal generative AI tool that will be embedded throughout AWS and backed up with Amazon Bedrock and infrastructure for model training and inference powered by Trainium and Inferentia processors.

    The catch? AWS' strategy rhymes with Microsoft's copilot everywhere plans as well as Google Cloud's Duet AI plans. The interesting twist for AWS is that it's more horizontal across the platform instead of focused on one application, use case or role. AWS also had a big developer spin on Amazon Q, which is seen as an ally to developers because it can connect code and infrastructure.  

    For enterprises, the big question is whether they will go with one AI stack or multiple. And is that decision made by CXOs or developers? Time will tell, but for now the biggest takeaway from re:Invent is that AWS is infusing its entire portfolio with generative AI via a bevy of additions that are in preview.

    Adam Selipsky, CEO of AWS, said during his re:Invent keynote that AWS has been leveraging AI for decades and now plans to help customers reinvent with generative AI. "We're ready to help you reinvent with generative AI," said Selipsky, who added that experimentation is turning into real-world productivity gains.

    Selipsky said AWS is investing in all layers of the generative AI stack. "It's still early days and customers are finding different models work better than others," said Selipsky. "Things are moving so fast that the ability to adapt is the best capability you can have. There will be multiple models, and you'll switch between them and combine them. You need real choice."

    In a veiled reference to OpenAI, Selipsky said "the events of the last few days illustrate why model choice matters."

    Here's a look at the moving parts of AWS' generative AI strategy as outlined by Selipsky and other executives.

    Top layer of AWS' AI stack is Q

    Amazon Q is billed as "a new type of generative AI assistant that's an expert in your business."

    Amazon Q is AWS' answer to Microsoft Azure's copilot-everywhere theme. According to AWS, Q will engage in conversation to solve problems, generate content and then take action. It will know your code, personalize interactions by role and permissions, and is built to be private.

    The vision for Amazon Q is to do everything from answering software development questions to serving as a resource for HR to monitoring and enhancing customer experiences. The most interesting theme is that AWS is playing to its strengths--code and infrastructure--and using Q to connect the dots.

    "AWS had built a bevy of cross services capabilities and Amazon Q is the next evolution," said Constellation Research analyst Holger Mueller.

    Mueller said:

    "What sets Q apart is that it has a shot to provide one single assistant across all of AWS services. Q reduces one of the main challenges of AWS--the complexity introduced by the thousands of services offered. But it is only possible because Amazon has been working on the integration layer across its services, starting from access and security over an insights layer, one foundation with SageMaker for all AI and e.g. Data Zone. It can now collect the benefit and has a shot at redefining the future of work. Partnerships with SAP, Salesforce, Workday and more ERP vendors make it easily the Switzerland of GenAI assisted work in the multi-vendor enterprise.”

    Doug Henschen, Constellation Research analyst, crystallized the significance of Amazon Q. He said:

    "Amazon Q was the most broadly compelling and exciting GenAI announcement during Adam Selipsky’s keynote. Amazon Q is an AI assistant that will engage in conversations based on understanding of  company-specific information, code, and technology systems. The promise is personalized interactions based on user-specific roles and permissions. AWS previously offered QuickSight Q for natural language querying within its QuickSight analytics platform, but Amazon Q is single, all-purpose GenAI assistant. The idea is to deliver a AI assistant that will understand the context of data, text, and code. At this point they have some 40 connectors to popular enterprise systems outside of AWS, such as Office 365, Google cloud apps, Dropbox and more. The promise is nuanced, contextual understanding of what users are seeking when they ask questions in natural language."  

    Here's how Amazon Q will be leveraged:

    • Developers and builders will use Amazon Q to architect, troubleshoot and optimize code, develop features and transform code based on 17 years of AWS knowledge.
    • For lines of business, Amazon Q is about getting answers to questions, with access controls, and completing actions.
    • Specialists will get Amazon Q in QuickSight, Connect and Supply Chain.

    Selipsky said AI chat applications are falling short because they're in silos by use cases and applications. "They don't know your business or how you operate securely," said Selipsky. "Amazon Q is designed to work for you at work. We've designed Q to meet your stringent enterprise requirements from day one."

    What's interesting for enterprise buyers is the approach of Amazon Q. Does a horizontal approach to generative AI across multiple functions curb model sprawl?

    Amazon Bedrock: More choices, more relevancy after OpenAI fiasco?

    Amazon added new models to Bedrock, along with secure customization and fine-tuning of models, agents to complete tasks, automated model evaluation tools and knowledge bases.

    Specifically, new models added to Bedrock include:

    • Claude 2.1;
    • Llama 2 Chat with 13 billion and 70 billion parameters;
    • Stable Diffusion XL;
    • Titan models focused on generating images and multimodal embeddings;
    • Command Light & Embed.

    Fine-tuning will be available for Meta Llama 2, Cohere Command and Titan, with Claude 2 coming soon.

    Selipsky said enterprises need to orchestrate between models and get foundational models to take actions.

    To support this, AWS has integrated AI capabilities into its data services such as Redshift and Aurora. Redshift Serverless will get a bevy of AI optimizations, and AI will also be used to simplify application management on AWS.

    The breakdown of the data pieces underpinning the broader AI strategy for AWS include:

    • Amazon Aurora Limitless Database, which supports automated horizontal scaling of write capacity. Constellation Research analyst Doug Henschen noted "Aurora Limitless Database is an important step forward in potentially matching rivals such as Oracle on automated scalability."
    • Redshift Serverless AI Optimizations, which brings machine learning scaling and optimization to AWS analytical database for data warehousing. Henschen said "matching rivals by adding sophisticated, ML-based scaling and optimization capabilities with cost guardrails will make Redshift more efficient and performant as well as even more cost competitive." 

    Trainium, Inferentia with a dash of SageMaker

    AWS launched the latest versions of its Trainium and Inferentia processors, two custom AI chips that may be able to bring the price of model training down. Today, AI workloads are dominated by Nvidia, and AMD is entering the market.

    However, AWS has had GPU instances in the market and has been able to acquire workloads from enterprises that may not need Nvidia's horsepower. Here's the breakdown:

    • For training generative AI models, AWS launched Trainium2, which is 4x faster than its previous version and operates at 65 exaflops.
    • For using generative AI models and inference, AWS launched Inferentia2, which has 4x the throughput of the previous version and 10x lower latency.
    • Riding on top of these processors is SageMaker HyperPod, which can reduce the time to train foundation models by up to 40%. AWS said SageMaker HyperPod can distribute model training in parallel across thousands of accelerators, with automatic checkpointing and resiliency.
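As a rough illustration of the checkpointing-and-resiliency pattern HyperPod automates (this is not the actual service API), here's a toy training loop that resumes from its last checkpoint after a simulated node failure; the step counts and loss decay are invented.

```python
# Toy checkpoint-and-resume training loop, illustrating the resiliency
# pattern (not the actual SageMaker HyperPod API). Loss decay and step
# counts are invented numbers.

saved = {"step": 0, "loss": 100.0}   # last checkpoint; real systems persist this

def train(total_steps, fail_at=None):
    """Run the loop from the last checkpoint, checkpointing every step."""
    global saved
    state = dict(saved)
    while state["step"] < total_steps:
        if state["step"] == fail_at:
            raise RuntimeError("simulated node failure")
        state = {"step": state["step"] + 1, "loss": state["loss"] * 0.9}
        saved = dict(state)          # checkpoint after each step
    return state

try:
    train(10, fail_at=6)             # a failure partway through the run...
except RuntimeError:
    pass
final = train(10)                    # ...resumes from step 6, not from step 0
```

The point is that a hardware failure costs only the work since the last checkpoint, which matters when a training run spans thousands of accelerators for weeks.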

    AWS also added Inference Optimization to SageMaker and said the new service can reduce foundation model deployment cost by an average of 50% with intelligent routing, scaling policies and better efficiency by deploying multiple models to the same instance.

    Bottom line: AWS made its case to be the generative AI stack for enterprises, but Henschen noted there were a lot of things in preview. One thing is clear: Selipsky's not-so-veiled references to Microsoft Azure and OpenAI illustrate that the narrative gloves are coming off.


    AWS presses custom silicon edge with Graviton4, Trainium2 and Inferentia2

    Amazon Web Services launched Graviton4, its custom chip for multiple workloads, with big improvements over last year's Graviton3. AWS also launched the latest versions of its Trainium and Inferentia processors, two custom AI chips that may be able to bring the price of model training down.

    The takeaway: AWS plans to push its custom silicon cadence to gain more workloads even as it partners with big guns such as Nvidia, Intel and AMD.

    Graviton4 is billed as AWS' "most powerful and energy efficient chip that we have ever built." The launch also shows a faster cadence for AWS' processors: Graviton launched in 2018, Graviton2 followed a year later, and Graviton3 launched last year.

    According to AWS, Graviton4 is 30% faster than Graviton3, 30% faster for web applications and 40% faster for database applications. AWS has 150 different Graviton-powered Amazon EC2 instance types globally, has built more than 2 million Graviton processors, and has more than 50,000 customers including Datadog, DirecTV, Discovery, Formula 1 (F1), NextRoll, Nielsen, Pinterest, SAP, Snowflake, Sprinklr, Stripe and Zendesk.

    Hyperscalers are racing to create custom processors that give enterprises an option to cut compute costs. While most of the focus is on model training and inferencing, AWS' custom processor strategy is wider: in addition to Graviton, AWS launched Trainium and Inferentia aimed at AI workloads.

    Adam Selipsky, CEO of AWS, said during his re:Invent keynote that Graviton is an effort to lower the cost of cloud compute. "We have more than 50,000 customers for Graviton," said Selipsky, who cited SAP as a key customer. "Other cloud providers have not delivered on their first server processors," he said.

    Graviton4 will power R8g instances for EC2 with more instances planned. R8g instances are in preview.

    Juergen Mueller, CTO of SAP, said Graviton-based EC2 instances have provided a 35% bump in price performance for analytical workloads. SAP will be validating Graviton4 performance. 

    Trainium and Inferentia are aimed at enterprises that may not need Nvidia's horsepower for every AI workload. Here's the breakdown:

    • For training generative AI models, AWS launched Trainium2, which is 4x faster than its previous version and operates at 65 exaflops.
    • For using generative AI models and inference, AWS launched Inferentia2, which has 4x the throughput of the previous version and 10x lower latency.

    "We need to keep pushing on price/performance on training and inference," said Selipky, who again referenced that other cloud providers were behind on custom silicon. Microsoft announced its AI processors at Ignite 2023

    Constellation Research analyst Holger Mueller said:

    "AWS pushes its custom siliiicon with version 2 on Trainium and Inferentia chips, as well as its fundamental Graviton chip. When a large ISV like SAP moves 4M lines of code and sees cost savings it is something to take note of. Equally the sustainability aspect is key for the next version of custom silicon--it saves real money."

    During the keynote Selipsky was sure to note that Nvidia remains a key partner for AWS, which has a bevy of Nvidia GPU instances. Nvidia CEO Jensen Huang appeared on stage to outline the next phase of the partnership with AWS.

    Selipsky said AWS will add Nvidia DGX Cloud and the latest GPUs to its platform. Huang said Nvidia will build its largest AI Foundry on AWS. Note that Huang also appeared with Microsoft CEO Satya Nadella to tout Nvidia's partnership on Azure.

    However, AWS is looking to broaden GPU workloads and the bet is that it'll handle a lot of those on its own silicon.

     


    AWS Introduces Two Important Database Upgrades at Re:Invent 2023

    Monday night, Nov. 27, at Re:Invent 2023, AWS’ Peter DeSantis, SVP of Utility Computing, announced two important database features: Amazon Aurora Limitless Database and Redshift Serverless AI Optimizations. Here's my analysis.



    Amazon Aurora Limitless Database: Announced in private preview, this is an automated sharding feature for Aurora PostgreSQL that will enable customers to horizontally scale database write capacity via sharding. Aurora previously supported automated horizontal scaling of read capacity, but write capacity could only be automatically scaled vertically, by implementing larger and more powerful compute instances via the Aurora Serverless V2 feature. When customers reached the limit of vertical scaling, meaning they had already employed the most powerful compute instances, they would have to resort to sharding data across multiple database instances. This has been a common practice for large database deployments, but manual sharding at the application layer introduces complexity and administrative burdens. Aurora Limitless Database does away with these burdens by automating the sharding of data across database instances behind the scenes while ensuring transactional consistency.
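As a rough illustration of the application-layer sharding Aurora Limitless Database is designed to hide behind a single endpoint, here's a toy hash-based router. The shard count and key scheme are arbitrary assumptions, and the real feature also maintains transactional consistency across shards, which this sketch omits.

```python
# Toy application-layer sharding: the routing pattern Aurora Limitless
# Database automates. Shard count and key scheme are arbitrary choices;
# cross-shard transactions are deliberately out of scope here.
from hashlib import sha256

NUM_SHARDS = 4
shards = [{} for _ in range(NUM_SHARDS)]   # in-memory stand-ins for DB instances

def shard_for(key):
    """Deterministically map a shard key to one shard instance."""
    digest = sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

def write(key, row):
    shards[shard_for(key)][key] = row      # each write lands on exactly one shard

def read(key):
    return shards[shard_for(key)].get(key) # reads route to the same shard

write("customer-42", {"plan": "pro"})
```

Because routing is deterministic, reads always find their writes; the hard part a managed service takes on is rebalancing shards and keeping multi-shard transactions consistent.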

    Doug’s take: Aurora Limitless Database will step up competition with the world’s number-one database and Aurora’s biggest competitive target, Oracle Database. AWS is actually playing catch up with this feature, as Oracle introduced automated sharding back in 2017. Nonetheless, given that Aurora is so cost competitive, touted as one tenth the cost of its rivals, Aurora Limitless Database is an important step forward in potentially matching rivals such as Oracle on automated scalability.



    Redshift Serverless AI Optimizations: In another move to match competitors, AWS introduced Amazon Redshift Serverless AI Optimizations. This feature brings ML-based scaling and optimization to AWS' flagship analytical database for data warehousing. AWS introduced Redshift Serverless in 2021 in order to automate database scaling, but the capability was reactive. Given the time it takes to get new instances up and running, there were sometimes performance penalties. The AI Optimizations feature introduces a new machine learning-powered forecasting model that does a better job of forecasting the capacity requirements of existing as well as new and unfamiliar queries. A simple slider control is said to let administrators set the balance between maximizing performance and minimizing cost.
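The proactive-versus-reactive distinction can be sketched in a few lines. Below is a toy moving-average capacity planner; the window size, the 20% headroom (standing in for the performance/cost slider) and the workload numbers are all invented for illustration and have nothing to do with Redshift's actual model.

```python
import math

# Toy proactive capacity planner: forecast the next interval's load from
# recent history and provision ahead of demand rather than reacting to it.
# Window size, headroom and per-unit throughput are invented numbers.

def forecast(history, window=3):
    """Predict the next interval's load as the mean of the last `window` points."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_capacity(history, per_unit=100.0):
    """Units of compute to provision, with 20% headroom over the forecast
    standing in for a performance-versus-cost slider setting."""
    predicted = forecast(history) * 1.2
    return max(1, math.ceil(predicted / per_unit))

load = [80.0, 120.0, 160.0, 200.0]   # queries per interval, trending upward
units = plan_capacity(load)          # capacity is added before the spike hits
```

A reactive scaler would wait for queries to queue before adding the second unit; forecasting ahead of the trend is what avoids the startup-lag penalties described above.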

    Doug’s take: ML-based forecasting and optimization is old hat in the world of data warehousing, implemented by the likes of Oracle and Snowflake. Here, too, AWS is playing catchup, but the appeal of Redshift is as a cost-competitive data warehousing option within the AWS ecosystem. Matching rivals by adding sophisticated, ML-based scaling and optimization capabilities with cost guardrails will make Redshift more efficient and performant as well as more cost competitive.

    Related resources:
    Google Sets BigQuery Apart With GenAI, Open Choices, and Cross-Cloud Querying
    Salesforce Data Cloud Emerges as an Obvious Choice for CRM Customers
    How Data Catalogs Will Benefit From and Accelerate Generative AI


    Alianza, AWS team up for cloud communication services

    Alianza and Amazon Web Services (AWS) signed a multi-year partnership to enable traditional communication service providers to deliver and monetize voice and cloud communications services.

    The deal, outlined at AWS re:Invent, also highlights how AWS partners with vendors to capture workloads in key markets and verticals. Alianza Chief Product Officer Dag Peak said the joint Alianza-AWS offering has been deployed by more than 100 communications service providers (CSPs). These CSPs, which include Lumen, Brightspeed and Viasat, are moving from traditional voice networks to more nimble cloud platforms.

    Alianza, which raised $61 million in financing Oct. 31, offers a cloud communications platform that replaces service providers' legacy systems so they can offer cloud meetings, collaboration and other digital services.

    According to the companies, the combination of Alianza and AWS will deliver the following:

    • Lower costs and simplified operations by replacing soft-switch voice-over-IP networks and legacy hardware with a unified-communications-as-a-service platform.
    • Improved customer service via digital automation and control over customer experiences.
    • The ability to launch new services built on Alianza and AWS.
    • A unified view into operations via a software-as-a-service interface.
    • Upcoming tools so CSPs can offer new generative AI services via Amazon Bedrock and other AI services. 

    I caught up with Peak to talk about the CSP market and how it's migrating to the cloud.

    The market. Peak said the traditional telephony market is still large, but often forgotten as vendors have moved up the stack to communication and collaboration apps (think RingCentral, Zoom, Cisco Webex, Microsoft Teams). "In markets where we play well, we don't have many competitors. There's very little interest in smaller service providers," said Peak.

    The need for cloud platforms. CSPs need to move to the cloud as they transform from phone companies into broader communications providers. "These CSPs can offer a full stack of services all living in the cloud," said Peak. "AWS is interested in Alianza because our platform is allowing CSPs to offer a full stack of services in the cloud and it can pull in the workloads."

    Transformation. CSPs don't want to rebuild traditional services; they want to modernize everything they are doing, said Peak. He noted, however, that transformation for CSPs starts with bringing AI to traditional voice services, because many customers are mobile and don't want to communicate via apps. Alianza's services are delivered by CSPs to customers via an eSIM. "Not everyone is sitting at a computer. Small businesses rely on telephones," said Peak. "We want to enable AI for the hair salons, insurance agencies, and flower shops."

    Use cases. Peak said it's possible to bring tools like sentiment analysis and experience tracking to small businesses. "Contact centers get all the AI love, but we can use AI far down market with AWS integrations," said Peak. The goal is to bring enterprise-grade AI services to mainstream businesses without an app.


    AWS bets palm reading will come to an enterprise near you

    Amazon Web Services launched Amazon One Enterprise, a palm-based identity service that aims to make palm-reading a mainstream way to enter buildings, improve security and verify credentials.

    AWS said Amazon One Enterprise is being used by Boon Edam, IHG Hotels and Resorts, Paznic, and KONE. The service is in preview in the US and pricing wasn't immediately available. Amazon One Enterprise's FAQ is worth checking out for various details on enrollment, security and device setup. 

    The company announced the launch at AWS re:Invent. AWS sees palm reading as a way to better secure and authorize access to physical locations such as data centers, offices, buildings, airports and hotels as well as a way to restrict software and document access.

    As for the potential return on investment, the argument for Amazon One Enterprise is straightforward: enterprises wouldn't have to create and manage badges, fobs and PINs; IT departments could simply install Amazon One devices. The help desk hours spent on lost security devices and forgotten PINs alone could justify a look at palm-scanning methods.
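    The ROI argument above reduces to back-of-envelope arithmetic. The sketch below makes that explicit; every figure in it (incident rates, minutes per ticket, help desk hourly cost) is a hypothetical assumption for illustration, not AWS pricing or published benchmark data.

    ```python
    # Back-of-envelope estimate of yearly help desk spend on lost badges,
    # fobs and forgotten PINs. All numbers are hypothetical assumptions
    # chosen for illustration only.

    def annual_credential_support_cost(employees: int,
                                       incidents_per_employee: float,
                                       minutes_per_incident: float,
                                       helpdesk_rate_per_hour: float) -> float:
        """Estimate yearly help desk cost of credential-related tickets."""
        total_hours = employees * incidents_per_employee * minutes_per_incident / 60
        return total_hours * helpdesk_rate_per_hour

    # Example: 5,000 employees, 0.5 credential incidents each per year,
    # 20 minutes per ticket, $40/hour loaded help desk cost.
    cost = annual_credential_support_cost(5000, 0.5, 20, 40.0)
    # cost ≈ $33,333 per year — the kind of figure an enterprise would
    # weigh against the cost of deploying Amazon One devices.
    ```

    Whether palm devices win depends on how that recurring figure compares to device and service pricing, which AWS had not yet published.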

    Here are the key points about Amazon One Enterprise:

    • It is a fully managed service, combining a biometric identification device with administration through the AWS management console.
    • Security controls are built into every stage of the service, from the Amazon One device to data in transit and in the cloud. Palm images, metadata and user credentials are immediately encrypted, and each palm signature has its own unique key.
    • AWS said the accuracy rate of the palm and vein imagery is 99.9999% and better than scanning two irises.
    • Amazon One uses AI and machine learning to associate a palm signature with credentials such as badge ID, employee ID or PIN.
    • The console handles enrollment and provides visibility into authentications, device status, software updates and analytics.
    • Amazon One Enterprise comes in two form factors: a standalone device and a pedestal-mounted version.

    AWS launches Braket Direct with dedicated quantum computing instances, access to experts

    Amazon Web Services rolled out Braket Direct, a service that allows researchers to procure dedicated private access to quantum processing units from providers such as IonQ, Oxford Quantum Circuits, QuEra and Rigetti, along with access to the Amazon Quantum Solutions Lab.

    The effort is part of Amazon Braket, AWS' quantum computing marketplace launched in 2020. Announced at AWS re:Invent, Braket Direct also puts experts at the ready to give guidance on workloads and provide access to features and devices. These experts offer free office hours and one-on-one reservation prep sessions.

    At a keynote Monday night, Peter DeSantis, Senior Vice President of AWS Utility Computing, said the issue AWS is trying to solve is that qubits are too noisy for workloads. In his keynote coverage, Constellation Research analyst Dion Hinchcliffe said:

    "This tech looks five years out at least, but they are clearly gearing up because the tech will be critical to tackle issues in scientific research, cryptography, pharmacology, and other domains."

    With Braket Direct, customers can reserve an entire quantum machine on IonQ Aria, QuEra Aquila, and Rigetti Aspen-M-3 devices for a period of time. Since the machines are completely dedicated, customers can run complex and time-sensitive workloads or use the systems for training.

    Constellation Research analyst Holger Mueller said:

    "AWS kept its quantum plans under tight lid - saying it was only research and now DeSantis shows a brand new quantum chip. AWS is focussing on a new approach to qubit error correction - which will be promising for both for cost and performance of quantum machines." 

    Braket Direct customers can also access systems that have reduced or limited availability. AWS cited IonQ's 30-qubit Forte system as one of those devices. It remains to be seen whether Braket Direct can boost the revenue bases of pure-play quantum computing vendors. For instance, Rigetti Computing's revenue for the nine months ended Sept. 30 was $8.63 million. IonQ revenue for the same time frame was $15.94 million.

    Pricing varies, but an IonQ system will run you $7,000 an hour, Rigetti goes for $3,000 an hour and QuEra is $2,500 an hour. Expert advice offerings are billed separately. Hybrid jobs have a different pricing setup. Here's a look at the Braket Direct QPU pricing.
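    Using the hourly rates quoted above, the cost of a dedicated reservation is a simple multiplication. A minimal sketch, using only the rates in this article ($7,000/hour IonQ, $3,000/hour Rigetti, $2,500/hour QuEra) and deliberately omitting the separately billed expert-advice and hybrid-job pricing:

    ```python
    # Quick cost estimator for dedicated QPU reservations, using the hourly
    # rates quoted in the article. Expert advice and hybrid jobs are billed
    # separately and are not modeled here.

    HOURLY_RATES_USD = {
        "ionq": 7000,
        "rigetti": 3000,
        "quera": 2500,
    }

    def reservation_cost(provider: str, hours: float) -> float:
        """Return the reservation cost in USD for a dedicated block of time."""
        try:
            return HOURLY_RATES_USD[provider.lower()] * hours
        except KeyError:
            raise ValueError(f"no quoted rate for provider: {provider}") from None

    # A four-hour dedicated block on an IonQ device costs $28,000.
    cost = reservation_cost("IonQ", 4)
    ```

    At these rates, even short dedicated windows are significant line items, which is why reservations target complex, time-sensitive workloads rather than routine experimentation.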
