Customizing coding companions for organizations

Generative AI models for coding companions are mostly trained on publicly available source code and natural-language text. While the large training corpus enables the models to generate code for commonly used functionality, these models are unaware of code in private repositories and the associated coding …
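
As an illustration of one way such customization can work, here is a minimal sketch of retrieval-augmented generation over a private codebase; the embedding model, file layout, and prompt format are assumptions for the example, not details from the post.

```python
# Minimal sketch: retrieval-augmented code generation over a private repo.
# The embedding model and repo path below are illustrative assumptions.
from pathlib import Path
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# Index private source files that a public base model has never seen.
chunks = [p.read_text() for p in Path("private_repo").rglob("*.py")]
chunk_embeddings = embedder.encode(chunks, convert_to_tensor=True)

def build_prompt(request: str, top_k: int = 3) -> str:
    """Prepend the most relevant private-code chunks to the request."""
    query = embedder.encode(request, convert_to_tensor=True)
    hits = util.semantic_search(query, chunk_embeddings, top_k=top_k)[0]
    context = "\n\n".join(chunks[h["corpus_id"]] for h in hits)
    return f"# Repository context:\n{context}\n\n# Task:\n{request}"
```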

Read More
Shared by AWS Machine Learning November 10, 2023

Promote pipelines in a multi-environment setup using Amazon SageMaker Model Registry, HashiCorp Terraform, GitHub, and Jenkins CI/CD

For organizations in the rapidly evolving landscape of artificial intelligence (AI) and machine learning (ML), building out a machine learning operations (MLOps) platform is essential for seamlessly bridging the gap between data science experimentation and deployment while meeting requirements around model performance, security, and compliance. In order to …
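
As a hedged sketch of what one promotion step in such a pipeline might look like, the snippet below uses boto3 to approve the newest pending model package in a SageMaker Model Registry group, the kind of step a Jenkins stage could run before Terraform deploys the approved model; the group name is a placeholder, not taken from the post.

```python
# Approve the newest pending model package so downstream tooling
# (e.g. Terraform) can deploy it. Group name is a hypothetical placeholder.
import boto3

sm = boto3.client("sagemaker")

packages = sm.list_model_packages(
    ModelPackageGroupName="my-model-group",
    ModelApprovalStatus="PendingManualApproval",
    SortBy="CreationTime",
    SortOrder="Descending",
)["ModelPackageSummaryList"]

if packages:
    sm.update_model_package(
        ModelPackageArn=packages[0]["ModelPackageArn"],
        ModelApprovalStatus="Approved",
    )
```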

Read More
Shared by AWS Machine Learning November 10, 2023

Overcoming leakage on error-corrected quantum processors

Posted by Kevin Miao and Matt McEwen, Research Scientists, Quantum AI Team. The qubits that make up Google's quantum devices are delicate and noisy, so it's necessary to incorporate error-correction procedures that identify and account for qubit errors on the way to building a useful quantum computer. Two …
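
For flavor, here is a schematic Cirq sketch of the generic idea of flushing a qubit between error-correction rounds with a reset; it is not the leakage-removal scheme the post describes, and a standard two-level simulation cannot actually represent leaked states.

```python
# Schematic Cirq sketch (not the post's scheme): return an ancilla to |0>
# after each stabilizer-measurement cycle, the generic way stray population
# is flushed out between error-correction rounds.
import cirq

data, ancilla = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.CNOT(data, ancilla),            # parity check onto the ancilla
    cirq.measure(ancilla, key="syndrome"),
    cirq.reset(ancilla),                 # discard whatever the ancilla holds
)
print(circuit)
```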

Read More
Shared by Google AI Technology November 9, 2023

Optimize for sustainability with Amazon CodeWhisperer

This post explores how Amazon CodeWhisperer can help optimize code for sustainability through increased resource efficiency. Computationally resource-efficient coding aims to reduce the energy required to process a line of code and, as a result, to help companies consume less energy overall.
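
As an illustrative example (not taken from the post) of the kind of resource-efficiency rewrite involved, compare the two membership-test functions below: the second does asymptotically less work, which translates directly into less CPU time and energy.

```python
# The kind of rewrite a coding companion can suggest: replacing repeated
# list scans with a set turns O(n*m) work into O(n+m).

def common_items_slow(a: list, b: list) -> list:
    return [x for x in a if x in b]        # rescans all of b for every x

def common_items_fast(a: list, b: list) -> list:
    b_set = set(b)                         # one pass to build the set
    return [x for x in a if x in b_set]    # O(1) membership checks
```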

Read More
Shared by AWS Machine Learning November 9, 2023

Harnessing the power of enterprise data with generative AI: Insights from Amazon Kendra, LangChain, and large language models

Large language models (LLMs), with their broad knowledge, can generate human-like text on almost any topic. However, because they are trained on massive general-purpose datasets, their usefulness for specialized tasks is limited. Without continued learning, these models remain oblivious to new data and trends that emerge after their initial training. Furthermore, the …
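
A minimal sketch of the Kendra-plus-LangChain retrieval pattern the post covers, assuming a hypothetical index ID and an arbitrary LangChain LLM (the post may use a different model):

```python
# Retrieval-augmented QA over enterprise data indexed in Amazon Kendra.
# The index ID and LLM choice are placeholder assumptions.
from langchain.retrievers import AmazonKendraRetriever
from langchain.chains import RetrievalQA
from langchain.llms import Bedrock

retriever = AmazonKendraRetriever(index_id="kendra-index-id")  # hypothetical
llm = Bedrock(model_id="anthropic.claude-v2")                  # assumption

qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
print(qa.run("What is our parental leave policy?"))
```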

Read More
Shared by AWS Machine Learning November 8, 2023

Boosting sustainable solutions from Sweden

Today, we're announcing the Swedish recipients of the Google.org Impact Challenge: Tech for Social Good, who are receiving technical support and €3 million in funding for char…

View Original Source (blog.google/technology/ai/)

Alternating updates for efficient transformers

Posted by Xin Wang, Software Engineer, and Nishanth Dikkala, Research Scientist, Google Research. Contemporary deep learning models have been remarkably successful in many domains, ranging from natural language to computer vision. Transformer neural networks (transformers) are a popular deep learning architecture that today forms the foundation for most tasks …
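
A much-simplified sketch of the alternating-updates idea, not the paper's exact predictor-corrector scheme: keep a representation twice as wide as the transformer block and let each layer run the expensive block on only one half, alternating halves layer by layer, so compute stays near constant while the representation widens.

```python
# Simplified alternating-updates sketch (illustrative, assumes PyTorch).
import torch
import torch.nn as nn

class AltUpLayer(nn.Module):
    def __init__(self, d_model: int, side: int):
        super().__init__()
        self.side = side  # 0 or 1: which half this layer updates
        self.block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = x.shape[-1] // 2
        halves = [x[..., :d], x[..., d:]]
        halves[self.side] = self.block(halves[self.side])  # expensive update
        return torch.cat(halves, dim=-1)   # other half passes through as-is

layers = nn.Sequential(*[AltUpLayer(64, i % 2) for i in range(4)])
out = layers(torch.randn(2, 16, 128))  # wide 128-dim state, 64-dim blocks
```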

Read More
Shared by Google AI Technology November 7, 2023