Enhance code review and approval efficiency with generative AI using Amazon Bedrock

In the world of software development, code review and approval are important processes for ensuring the quality, security, and functionality of the software being developed. However, managers tasked with overseeing these critical processes often face numerous challenges, such as the following:

  • Lack of technical expertise – Managers may not have an in-depth technical understanding of the programming language used or may not have been involved in software engineering for an extended period. This results in a knowledge gap that can make it difficult for them to accurately assess the impact and soundness of the proposed code changes.
  • Time constraints – Code review and approval can be a time-consuming process, especially in larger or more complex projects. Managers need to balance the thoroughness of their review against the pressure to meet project timelines.
  • Volume of change requests – Dealing with a high volume of change requests is a common challenge for managers, especially if they’re overseeing multiple teams and projects. Similar to the challenge of time constraints, managers need to be able to handle those requests efficiently so as to not hold back project progress.
  • Manual effort – Code review requires manual effort by managers, and the lack of automation can make it difficult to scale the process.
  • Documentation – Proper documentation of the code review and approval process is important for transparency and accountability.

With the rise of generative artificial intelligence (AI), managers can now harness this transformative technology and integrate it with the AWS suite of deployment tools and services to streamline the review and approval process in a manner not previously possible. In this post, we explore a solution that offers an integrated end-to-end deployment workflow that incorporates automated change analysis and summarization together with approval workflow functionality. We use Amazon Bedrock, a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. With the Amazon Bedrock serverless experience, you can get started quickly, privately customize FMs with your own data, and integrate and deploy them into your applications using AWS tools without having to manage any infrastructure.

Solution overview

The following diagram illustrates the solution architecture.

Architecture Diagram

The workflow consists of the following steps:

  1. A developer pushes new code changes to their code repository (such as AWS CodeCommit), which automatically triggers the start of an AWS CodePipeline deployment.
  2. The application code goes through a build process, and vulnerability scans and unit tests are run using your preferred tools.
  3. AWS CodeBuild retrieves the repository and runs a git show command to extract the code differences between the current commit and the previous commit. This produces a line-by-line output of the code changes made in this release.
  4. CodeBuild saves the output to an Amazon DynamoDB table with additional reference information:
    1. CodePipeline run ID
    2. AWS Region
    3. CodePipeline name
    4. CodeBuild build number
    5. Date and time
    6. Status
  5. Amazon DynamoDB Streams captures the data modifications made to the table.
  6. An AWS Lambda function is triggered by the DynamoDB stream to process the record captured.
  7. The function invokes the Anthropic Claude v2 model on Amazon Bedrock via the Amazon Bedrock InvokeModel API call. The code differences, together with a prompt, are provided as input to the model for analysis, and a summary of code changes is returned as output (a minimal sketch of steps 6–8 follows this list).
  8. The output from the model is saved back to the same DynamoDB table.
  9. The manager is notified via Amazon Simple Email Service (Amazon SES) of the summary of code changes and that their approval is required for the deployment.
  10. The manager reviews the email and provides their decision (either approve or reject) together with any review comments via the CodePipeline console.
  11. The approval decision and review comments are captured by Amazon EventBridge, which triggers a Lambda function to save them back to DynamoDB.
  12. If approved, the pipeline deploys the application code using your preferred tools. If rejected, the workflow ends and the deployment does not proceed further.
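
To make steps 6–8 concrete, the following is a minimal sketch of what the stream-triggered Lambda function might look like. The table name, key and attribute names (such as executionId and codeChange), and the abbreviated prompt are illustrative assumptions; the function deployed by the CloudFormation template may differ in its details.

import json
import os

import boto3

# Hypothetical table, model, and attribute names, for illustration only.
TABLE_NAME = os.environ.get("TABLE_NAME", "CodeChangeSummaries")
MODEL_ID = os.environ.get("MODEL_ID", "anthropic.claude-v2")

bedrock = boto3.client("bedrock-runtime")
dynamodb = boto3.resource("dynamodb")


def lambda_handler(event, context):
    table = dynamodb.Table(TABLE_NAME)
    for record in event["Records"]:
        # Only process items newly inserted by CodeBuild.
        if record["eventName"] != "INSERT":
            continue
        new_image = record["dynamodb"]["NewImage"]
        execution_id = new_image["executionId"]["S"]
        code_change = new_image["codeChange"]["S"]

        # Claude v2 on Amazon Bedrock uses the Human:/Assistant: completion format.
        prompt = (
            "\n\nHuman: Review the following \"git show\" output detailing code changes, "
            "and provide a concise summary of the modifications and their potential "
            f"consequences.\n\n{code_change}\n\nAssistant:"
        )
        response = bedrock.invoke_model(
            modelId=MODEL_ID,
            body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 1024}),
        )
        summary = json.loads(response["body"].read())["completion"]

        # Save the model's summary back to the same DynamoDB item (step 8).
        table.update_item(
            Key={"executionId": execution_id},
            UpdateExpression="SET #s = :s",
            ExpressionAttributeNames={"#s": "summary"},
            ExpressionAttributeValues={":s": summary},
        )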

In the following sections, you deploy the solution and verify the end-to-end workflow.

Prerequisites

To follow the instructions in this solution, you need the following prerequisites:

  • Model access to the Anthropic Claude v2 model on Amazon Bedrock, which you can request on the Amazon Bedrock console

Deploy the solution

To deploy the solution, complete the following steps:

  1. Choose Launch Stack to launch a CloudFormation stack in us-east-1.
  2. For EmailAddress, enter an email address that you have access to. The summary of code changes will be sent to this email address.
  3. For modelId, leave as the default anthropic.claude-v2, which is the Anthropic Claude v2 model.


Deploying the template will take about 4 minutes.

  4. When you receive an email from Amazon SES to verify your email address, choose the link provided to authorize your email address.
  5. You’ll receive an email titled “Summary of Changes” for the initial commit of the sample repository into CodeCommit.
  6. On the AWS CloudFormation console, navigate to the Outputs tab of the deployed stack.
  7. Copy the value of RepoCloneURL. You need this to access the sample code repository.
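
If you prefer to script this step, the stack outputs can also be read with the AWS SDK. The stack name below is a placeholder; substitute the name of the stack you launched.

import boto3

# Placeholder stack name; use the name of the stack you launched.
STACK_NAME = "code-review-genai"

cfn = boto3.client("cloudformation", region_name="us-east-1")
stack = cfn.describe_stacks(StackName=STACK_NAME)["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack["Outputs"]}
print(outputs["RepoCloneURL"])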

Test the solution

You can test the workflow end to end by taking on the role of a developer and pushing some code changes. Sample code has been prepared for you in CodeCommit. To access the CodeCommit repository, enter the following commands in your IDE:

git clone <RepoCloneURL>
cd my-sample-project
ls

You will find the following directory structure for an AWS Cloud Development Kit (AWS CDK) application that creates a Lambda function to perform a bubble sort on a comma-separated string of integers. The Lambda function is accessible via a publicly available function URL (an abbreviated sketch of the stack definition follows the directory listing).

.
├── README.md
├── app.py
├── cdk.json
├── lambda
│ └── index.py
├── my_sample_project
│ ├── __init__.py
│ └── my_sample_project_stack.py
├── requirements-dev.txt
├── requirements.txt
└── source.bat
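
For orientation before you edit it, the following is an abbreviated, hypothetical sketch of the kind of constructs my_sample_project_stack.py defines. The actual file in the sample repository is longer (the steps below reference its lines 47 and 56) and may use different names, so treat this only as a map of where the timeout and function URL settings live.

from aws_cdk import Duration, Stack
from aws_cdk import aws_lambda as _lambda
from constructs import Construct


class MySampleProjectStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda function that sorts a comma-separated string of integers.
        fn = _lambda.Function(
            self, "SortFunction",  # construct ID is assumed
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.lambda_handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.minutes(10),  # reduced to 5 seconds in the walkthrough
        )

        # Publicly accessible function URL; switched to IAM auth in the walkthrough.
        fn.add_function_url(auth_type=_lambda.FunctionUrlAuthType.NONE)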

You make three changes to the application code, and then push them to the repository.

  1. To enhance the function to support both the bubble sort and quick sort algorithms, take in a parameter that selects the algorithm to use, and return both the algorithm used and the sorted array in the output, replace the entire content of lambda/index.py with the following code (a local test invocation of the updated handler is sketched after this list):
# function to perform bubble sort on an array of integers
def bubble_sort(arr):
    for i in range(len(arr)):
        for j in range(len(arr)-1):
            if arr[j] > arr[j+1]:
                arr[j], arr[j+1] = arr[j+1], arr[j]
    return arr

# function to perform quick sort on an array of integers
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    else:
        pivot = arr[0]
        less = [i for i in arr[1:] if i <= pivot]
        greater = [i for i in arr[1:] if i > pivot]
        return quick_sort(less) + [pivot] + quick_sort(greater)

# lambda handler
def lambda_handler(event, context):
    try:
        algorithm = event['queryStringParameters']['algorithm']
        numbers = event['queryStringParameters']['numbers']
        arr = [int(x) for x in numbers.split(',')]
        if ( algorithm == 'bubble'):
            arr = bubble_sort(arr)
        elif ( algorithm == 'quick'):
            arr = quick_sort(arr)
        else:
            arr = bubble_sort(arr)

        return {
            'statusCode': 200,
            'body': {
                'algorithm': algorithm,
                'numbers': arr
            }
        }
    except:
        return {
            'statusCode': 200,
            'body': {
                'algorithm': 'bubble or quick',
                'numbers': 'integer separated by commas'
            }
        }
  2. To reduce the timeout setting of the function from 10 minutes to 5 seconds (because we don’t expect the function to run longer than a few seconds), update line 47 in my_sample_project/my_sample_project_stack.py as follows:
timeout=Duration.seconds(5),
  3. To restrict the invocation of the function using IAM for added security, update line 56 in my_sample_project/my_sample_project_stack.py as follows:
auth_type=_lambda.FunctionUrlAuthType.AWS_IAM
  4. Push the code changes by entering the following commands:
git commit -am 'added new changes for release v1.1'
git push
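
If you want to sanity-check the updated handler locally before the pipeline runs, you can load lambda/index.py directly (it can’t be imported with a normal import statement because lambda is a reserved word in Python) and call it with a hand-built event that mimics the query string a function URL would deliver:

import importlib.util

# Load lambda/index.py directly from the repository root.
spec = importlib.util.spec_from_file_location("sort_index", "lambda/index.py")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

event = {"queryStringParameters": {"algorithm": "quick", "numbers": "5,3,8,1"}}
print(module.lambda_handler(event, None))
# Expected: {'statusCode': 200, 'body': {'algorithm': 'quick', 'numbers': [1, 3, 5, 8]}}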

Pushing the changes starts the CodePipeline deployment workflow from Steps 1–9 as outlined in the solution overview. When invoking the Amazon Bedrock model, we provided the following prompt:

Human: Review the following "git show" output detailing code changes, enclosed within tags, and analyze their implications.
Assess the code changes made and provide a concise summary of the modifications as well as the potential consequences they might have on the code's functionality.

{code_change}

Assistant:

Within a few minutes, you will receive an email informing you that a deployment pipeline is pending your approval, along with the list of code changes made and the model-generated summary of those changes. The following is an example of the output:

Based on the diff, the following main changes were made:

1. Two sorting algorithms were added - bubble sort and quick sort.
2. The lambda handler was updated to take an 'algorithm' query parameter to determine which sorting algorithm to use. By default it uses bubble sort if no algorithm is specified. 
3. The lambda handler now returns the sorting algorithm used along with the sorted numbers in the response body.
4. The lambda timeout was reduced from 10 mins to 5 seconds. 
5. The function URL authentication was changed from none to AWS IAM, so only authenticated users can invoke the URL.

Overall, this adds support for different sorting algorithms, returns more metadata in the response, reduces timeout duration, and tightens security around URL access. The main functional change is the addition of the sorting algorithms, which provides more flexibility in how the numbers are sorted. The other changes improve various non-functional attributes of the lambda function.

Finally, you take on the role of an approver to review and approve (or reject) the deployment. The email contains a hyperlink that brings you to the CodePipeline console, where you can enter your review comments and submit your decision.


If approved, the pipeline will proceed to the next step, which deploys the application. Otherwise, the pipeline ends. For the purpose of this test, the Lambda function will not actually be deployed because there are no deployment steps defined in the pipeline.

Additional considerations

The following are some additional considerations when implementing this solution:

  • Different models produce different results, so you should experiment with different foundation models and prompts to achieve the desired results for your use case.
  • The analyses provided are not meant to replace human judgement. You should be mindful of potential hallucinations when working with generative AI, and use the analysis only as a tool to assist and speed up code review.

Clean up

To clean up the created resources, go to the AWS CloudFormation console and delete the CloudFormation stack.

Conclusion

This post explores the challenges faced by managers in the code review process, and introduces the use of generative AI as an augmented tool to accelerate the approval process. The proposed solution integrates the use of Amazon Bedrock in a typical deployment workflow, and provides guidance on deploying the solution in your environment. Through this implementation, managers can now take advantage of the assistive power of generative AI and navigate these challenges with ease and efficiency.

Try out this implementation and let us know your thoughts in the comments.


About the Author

Xan Huang is a Senior Solutions Architect with AWS and is based in Singapore. He works with major financial institutions to design and build secure, scalable, and highly available solutions in the cloud. Outside of work, Xan spends most of his free time with his family and getting bossed around by his 3-year-old daughter. You can find Xan on LinkedIn.
