We are pleased to share that Vanderbilt has created a custom generative AI platform called Amplify. This open-source platform is a secure generative AI sandbox that allows users to access and provide customized instructions to large language models.
If you are a faculty or staff member with an immediate need for the tool, you can navigate to the login page and use single sign-on (SSO) to access it. If you have questions regarding access to the platform, please reach out to the team at amplify@vanderbilt.edu.
Amplify FAQs
Amplify Basics
What is Amplify?
Amplify is Vanderbilt University's internal generative AI platform that leverages the same technology behind ChatGPT and similar AI models. It offers a secure, chat-based environment for work and research, ensuring that your data and chat history remain private and protected within Vanderbilt's internal infrastructure.
What makes Amplify different from public generative AI chat tools such as ChatGPT and others?
When using public generative AI chat tools, the data you enter is automatically shared with the company that makes the tool and may be used in a variety of ways. Amplify is an internal Vanderbilt tool with agreements with model providers not to use Vanderbilt data for model training, so data entered into Amplify is kept safe and secure while still providing access to the same AI models that power popular tools like ChatGPT, Claude, and others.
Who has access to Amplify?
As of April 2024, access to Amplify is limited to Vanderbilt faculty and staff. Faculty and staff are encouraged to request access to the tool by signing up for the Amplify pilot waiting list.
Data Security and Privacy
What type of data is safe to be put into Amplify?
Data security at Vanderbilt is based on a four-level classification system. Currently, Amplify is authorized for Level 1 and 2 data, which includes public data and private data that should not be available to non-VU individuals without permission. More sensitive data that falls under higher levels (such as data that must be kept confidential by contract or by laws such as FERPA and HIPAA) is not currently authorized for use in Amplify. However, clearance to input higher levels of data is on the roadmap for future updates to the platform. For more information on these data classifications, you can refer to the Data Classification Guidance from the Vanderbilt Office of Cybersecurity.
How secure is the data that I put into Amplify?
Just like the data stored in Vanderbilt systems such as SharePoint, Box, Teams, and university hardware, any data that is entered into Amplify is kept within Vanderbilt systems and is not used by AI companies like Microsoft, OpenAI, or any other non-Vanderbilt entity. The data you enter into Amplify cannot be collected or used by these AI companies to train their models. Your conversation history is saved in your local device’s browser, and Vanderbilt does not store your conversations unless you 1) share a conversation or 2) save your workspace.
General Features
What does temperature mean in Amplify?
When using generative AI tools, temperature refers to how precise or how creative you want the output of the tool to be. Adjusting the temperature can help you tailor the output to meet your specific needs. You can select a temperature value between 0 and 1, and it affects how predictable or diverse the output will be.
- Low Temperature (Closer to 0): Setting the temperature closer to 0 will produce more deterministic and repetitive responses. At a low temperature, the model is more likely to select the most probable words or phrases given the context, leading to text that is often more focused and precise but less varied and creative.
- High Temperature (Closer to 1): As the temperature approaches 1, the model's responses become more stochastic or random. This means that less probable words or phrases have a higher chance of being selected, resulting in responses that are more diverse, creative, and less predictable.
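Under the hood, temperature works by scaling a model's raw word scores (logits) before they are converted to probabilities. The short sketch below is a simplified illustration of that idea, not Amplify's actual implementation: a low temperature sharpens the probability distribution toward the most likely word, while a temperature of 1 leaves it more spread out.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Three candidate next words, with raw scores favoring the first.
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.2)   # near-deterministic
high = softmax_with_temperature(logits, 1.0)  # more diverse

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At temperature 0.2 nearly all probability lands on the top candidate, so sampling almost always picks the same word; at temperature 1.0 the alternatives retain meaningful probability, producing more varied output.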
When should I use the different temperatures?
Setting the temperature of a chat is primarily based on what your goal is with that conversation. If you're looking for precise, accurate information or responses, a lower temperature is generally more suitable. This might include technical explanations, specific instructions, or factual content. When your goal is to generate creative writing, brainstorming ideas, or you're looking for a variety of responses to explore different perspectives, a higher temperature value can help achieve that by introducing more novelty into the text. The temperature setting is a useful tool for experimentation and can serve as a way to adjust the model to more effectively adapt to your specific, unique use cases.
What AI models are available for use in Amplify?
Amplify provides access to a variety of large language models from providers including OpenAI, Anthropic, and Mistral. The models currently available to Amplify users are described under the next question.
What does selecting a model mean, and how do I pick between models?
Selecting a model refers to choosing a specific large language model from one of the various providers, such as OpenAI, Anthropic, Mistral, etc. Each model has unique capabilities and strengths, and the choice of model can impact the content, tone, and style of the output you receive. For instance, one model might excel at technical writing, while another might be better suited for creative storytelling. By selecting a model, you’re determining which AI's expertise aligns best with your needs for a particular project or query. A useful strategy for determining what model is best for your purpose is to test each model with various prompts to identify the strengths and weaknesses of the responses.
- GPT-3.5 can be useful for simpler queries where the latest information isn't crucial. As the same model available with a free ChatGPT account, GPT-3.5 is a cost-effective and less sophisticated alternative to GPT-4, balancing performance with affordability. It performs well at precise tasks such as discussing general topics, brainstorming, drafting questions, organizing fundamental information, proposing innovative ideas, and offering recommendations. This model is trained on information available through January 2022.
- GPT-4 is good for complex tasks requiring advanced understanding. As the model available to paid ChatGPT Plus users, it offers further advanced intelligence over its predecessors. GPT-4 can carry out complex mathematical operations, assist with code, analyze intricate documents and datasets, and demonstrate critical thinking and in-depth context understanding. This model is trained on information available through April 2023.
- Claude 3 Haiku is useful for urgent tasks, offering near-instant responsiveness and an emphasis on security and robustness through minimized risk of harmful outputs. It is roughly three times faster than its Claude peer models while being the most economical choice. Claude 3 Haiku is best for simple queries, lightweight conversation, rapid analysis of large volumes of data, and handling much longer prompts. This model is trained on information available through August 2023.
- Claude 3 Sonnet can be a good option for complex tasks requiring advanced understanding and intelligence, while keeping cost low. It offers a better balance between cost, speed, and performance compared to previous Claude models. Claude 3 Sonnet can perform complex mathematical computations and statistical analyses, assist with coding, think critically, and maintain context understanding. This model is trained on information available through August 2023.
- Mistral 7B is a model that is good for pre-filtering or high-volume tasks. It reduces costs when used alongside more sophisticated models like GPT-4, Claude-3, or Mixtral-8x7B. Mistral 7B is best for text-related tasks such as summarization, classification, answering questions, creative content generation, and light dialogue. This model is trained on information available through September 2023.
- Mixtral 8x7B can be a good model for rapid processing and task-specific fine-tuning. It offers advanced intelligence, compared to its predecessors, at a low cost. Generally, Mixtral 8x7B is good for text summarization, question answering, text classification, code generation, and creative content generation. This model is trained on information available through September 2023.
- Mistral Large is a model that should be considered for complex tasks and advanced understanding without the need for recent knowledge. It offers a greater level of intelligence compared to its predecessors (Mistral 7B & Mixtral 8x7B). Mistral Large excels in complex reasoning, text understanding, transformation, code generation, and offers advanced capabilities for multilingual reasoning and analysis. This model is trained on information available through 2021.
How do I cite that I am using a large language model?
Policies around citing usage of generative AI tend to vary between disciplines. In general, both APA and MLA have guidance on how to cite usage of LLMs (APA citation guidelines or MLA citation guidelines). Additionally, different professional organizations and/or academic journals may have their own citation and disclosure policies. In general, it is good practice to cite and disclose all uses of generative AI while also being mindful of when use of generative AI is appropriate or not. For more guidance on using generative AI, visit Vanderbilt’s generative AI hub, which includes guidance on using generative AI for faculty and staff.
What file types can I upload to Amplify?
Currently, Amplify works with most text-based files. Image and video files are not currently supported. The following file types are supported:
- Comma-Separated Values (.csv)
- Compressed file (.zip)
- Excel spreadsheets (.xlsx)
- Hypertext Markup Language - HTML (.html)
- JavaScript (.js)
- JSON format (.json)
- Markdown (.md)
- Plain text (.txt)
- Portable Document Format (.pdf)
- PowerPoint Presentation (.pptx)
- Python format (.py)
- Word documents (.docx)
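A simple way to check whether a file can be uploaded is to compare its extension against the list above. The sketch below does exactly that (the extension list comes from this FAQ; the function itself is an illustration, not part of Amplify):

```python
# File extensions Amplify currently accepts, per the list above.
SUPPORTED_EXTENSIONS = {
    ".csv", ".zip", ".xlsx", ".html", ".js", ".json",
    ".md", ".txt", ".pdf", ".pptx", ".py", ".docx",
}

def is_supported(filename: str) -> bool:
    """Return True if the file's extension is on the supported list."""
    dot = filename.rfind(".")
    return dot != -1 and filename[dot:].lower() in SUPPORTED_EXTENSIONS

print(is_supported("notes.md"))    # text-based file: accepted
print(is_supported("photo.png"))   # image file: not accepted
```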
Features Unique to Amplify
How can I manage files I’ve uploaded to Amplify?
In the Amplify chat bar, the Files icon allows you to access all files you have uploaded to Amplify in the past. This file manager allows users to access previously uploaded files without having to upload them for each new conversation.
Can I share and/or save conversations?
In Amplify, you can share conversations with other Amplify users. You can also download or export conversations so that you can save either a specific output or an entire conversation.
How does sharing work in Amplify?
Within Amplify, you can share items such as conversations, prompts, and custom instructions with other Amplify users. This option provides a great way for you to quickly share conversations with full chat history with other users to show what you have been working on with others. To find out more about how sharing works, including a step-by-step guide, refer to our in-depth guide linked here.
How do I download output from Amplify as a Word or PowerPoint file?
When using Amplify, you can download an entire chat conversation (which includes all your prompt inputs and received responses) or a single message. Anywhere you see a download icon, that output or conversation can be downloaded. Once you click the download icon, you can choose whether to download your conversation or message as a Word Document or PowerPoint. Selecting the PowerPoint option enables you to select and apply a design template to the PowerPoint, including a standard Vanderbilt University template.
How can I move items into folders?
You can drag items into folders by clicking a conversation in the left sidebar and dragging it into the folder you would like. Folders are organized alphabetically. You can also create a new folder by clicking the New Folder icon.
What is the difference between a folder of chats vs a folder of custom instructions?
Folders of chats are in the left sidebar and can only contain chats. Folders of custom instructions and prompt templates are located within the right sidebar and only contain custom instructions and prompt templates.
What is an Amplify Helper?
Tools within the Amplify Helper folder are prompt templates or custom instructions that the Amplify team considers helpful for Amplify users. They will auto-populate whenever you create a new workspace, and the contents of the Amplify helper folder may change over time.
What is a Prompt Template?
A Prompt Template is designed to streamline the process of reusing the same prompt or sharing a prompt with slight differences. A prompt template is a prompt with placeholders. You can select a prompt template, fill in the placeholders, and run the prompt with the model of your choice. This is a great tool if you create a detailed prompt that you plan to use for the same task but focused on different content.
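Conceptually, a prompt template works like string substitution: the fixed wording stays the same and only the placeholders change per use. A minimal sketch of the idea (the placeholder syntax here is illustrative, not Amplify's actual format):

```python
from string import Template

# A reusable prompt with placeholders for the parts that change each time.
template = Template(
    "Summarize the following $doc_type in $length bullet points:\n$content"
)

# Fill in the placeholders for one particular use of the template.
prompt = template.substitute(
    doc_type="meeting notes",
    length="three",
    content="Discussed Q3 budget, new hires, and the Amplify rollout.",
)
print(prompt)
```

The same template can then be reused with different documents by substituting new values, which is exactly the workflow a prompt template streamlines.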
What are Custom Instructions?
Custom Instructions in Amplify are a set of rules provided at the start of a conversation, designed to prime the AI tool to give a more focused and desired response. These instructions can include details about desired tone, style, or content, allowing you to customize the language model's responses to better suit your individual requirements. Examples of custom instructions include ones focused on turning data into a visualization or taking detailed text and reformatting it into a PowerPoint presentation.
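Custom instructions behave like a "system" message placed at the start of the conversation, a common pattern in chat-based LLM interfaces. The sketch below illustrates that structure (the message format shown is a generic convention, not necessarily how Amplify stores it internally):

```python
# Custom instructions are set once and shape every later reply.
custom_instructions = (
    "You are a data-visualization assistant. Always respond with a short "
    "description of a chart, then the steps to build it."
)

conversation = [
    {"role": "system", "content": custom_instructions},
    {"role": "user", "content": "Here is last month's enrollment data..."},
]

# New user messages are appended, but the system message stays in place,
# so the model keeps following the same rules throughout the conversation.
conversation.append({"role": "user", "content": "Now format it for a slide."})
print(conversation[0]["role"])
```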
What are follow-up buttons?
Follow-up buttons are pre-set buttons that let you quickly send frequently used follow-up questions or commands without typing them out each time. You can create these buttons the same way you create prompt templates and custom instructions. Once created, they appear at the end of a response message if the conversation contains the same tags you've associated with that button. This makes continuing conversations more efficient by providing relevant shortcuts based on the context.
What is the difference between Amplify Helpers, Prompt Templates, and Custom Instructions?
Custom Instructions are specific guidelines or commands provided by users to tailor the model's output to their specific needs. They are messages reinforced to the model throughout the conversation.
Prompt Templates are pre-written prompts with placeholders. They can have specific custom instructions attached to them, further tailoring the conversation.
Amplify Helpers are prompt templates and custom instructions that the Amplify development team deems “helpful” to Amplify users.
What are the usage limits for the models?
Amplify enforces an hourly rate limit to ensure fair usage and manage costs effectively. Users are restricted from exceeding a cost of $0.50 per hour. The average user is unlikely to exceed this cost; however, if your needs surpass this limit, you will soon be able to extend your usage by providing a valid Certificate of Authorization (COA) string.
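The idea behind an hourly cost cap can be sketched as a rolling-window rate limiter. This is a simplified illustration of the concept, not Amplify's actual billing code; only the $0.50 figure comes from this FAQ, and the class and method names are hypothetical:

```python
import time

class HourlyCostLimiter:
    """Track per-request costs and block requests once an hourly cap is hit."""

    def __init__(self, cap_dollars=0.50, window_seconds=3600):
        self.cap = cap_dollars
        self.window = window_seconds
        self.charges = []  # list of (timestamp, cost) pairs

    def spent_this_hour(self, now=None):
        now = time.time() if now is None else now
        # Keep only charges that fall inside the rolling one-hour window.
        self.charges = [(t, c) for t, c in self.charges if now - t < self.window]
        return sum(c for _, c in self.charges)

    def allow(self, cost, now=None):
        """Record the charge and return True, or return False if over the cap."""
        now = time.time() if now is None else now
        if self.spent_this_hour(now) + cost > self.cap:
            return False
        self.charges.append((now, cost))
        return True

limiter = HourlyCostLimiter()
print(limiter.allow(0.30, now=0))     # True: within the $0.50 cap
print(limiter.allow(0.30, now=60))    # False: would exceed the cap
print(limiter.allow(0.30, now=3700))  # True: the first charge has expired
```

Because the window rolls, usage frees up again as older charges age past one hour rather than resetting all at once.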
What is the limit on how much data I can input into a conversation?
The amount of data you can input into a conversation with Amplify is subject to two main constraints: the character limit for text input and the maximum context window for each model. The character limit for text input is a fixed number that represents the maximum number of characters you can enter into the text box at one time. Currently, Amplify has set this limit to 24,000 characters for all user inputs regardless of the model.
In addition to the text input limit, each model has a "maximum context window," which is the total amount of text (from both the user and the model's responses) that the model can consider when generating a response. This includes the history of the conversation. The size of the maximum context window varies between models and determines how many tokens (words or word pieces) the model can look back on at any given time.
Here are the approximate maximum context window sizes for the models currently used by Amplify. Please remember that these are approximations and may vary depending on individual conversations, words, and/or characters.
- GPT-3.5 ≈ 32,000 characters
- GPT-4 Turbo ≈ 250,000 characters
- Claude 3 Sonnet ≈ 400,000 characters
- Claude 3 Haiku ≈ 400,000 characters
- Mistral 7B Instruct ≈ 64,000 characters
- Mixtral 8x7B Instruct ≈ 64,000 characters
- Mistral Large ≈ 64,000 characters
Please note that the maximum context window is a separate constraint from the text input limit and is specific to each model's ability to process and remember conversation history.
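As a rough sketch, checking whether a new message fits both constraints might look like the following. The character figures come from the list above; the function itself is illustrative, and real systems count tokens rather than characters:

```python
# Approximate maximum context windows in characters, per the list above.
CONTEXT_WINDOWS = {
    "gpt-3.5": 32_000,
    "gpt-4-turbo": 250_000,
    "claude-3-sonnet": 400_000,
    "claude-3-haiku": 400_000,
    "mistral-7b": 64_000,
    "mixtral-8x7b": 64_000,
    "mistral-large": 64_000,
}

INPUT_LIMIT = 24_000  # Amplify's per-message character limit

def fits(model: str, history: list, new_message: str) -> bool:
    """Check a new message against both constraints described above."""
    if len(new_message) > INPUT_LIMIT:
        return False  # exceeds the per-message input limit
    total = sum(len(m) for m in history) + len(new_message)
    return total <= CONTEXT_WINDOWS[model]  # must also fit the context window

print(fits("gpt-3.5", history=["a" * 20_000], new_message="b" * 10_000))
```

Note how a message can pass the 24,000-character input limit yet still fail because the accumulated conversation history overflows the model's context window.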
What is a context window?
A context window in the realm of generative AI text tools refers to the amount of text the model can consider at any one time when predicting or generating the next word or sequence of text. It represents the "memory" of the model for the current task. When the model generates text, it looks at the words within this window to understand the context, maintain coherence, and ensure relevance to what has been said before.
What impact does the Response Length setting have on Amplify’s outputs and why would I want a longer or shorter response length?
The "Response Length" setting in Amplify dictates the verbosity of the model's outputs: a shorter response length yields concise and focused answers quickly, which is beneficial for straightforward questions, rapid insights, and cost reduction, as fewer computational resources are used. Conversely, a longer response length provides more comprehensive and detailed explanations, suitable for complex topics or when a deeper understanding is required. Adjust the response length to balance succinctness, detail, and the potential cost implications of your inquiries.
If I edit Amplify’s response to my prompt, will those edits impact my chat?
Yes. If you edit a prompt, that prompt will be run again and all previous responses to the original prompt will be lost. If you edit one of the responses you receive, the model will treat the modified response as what it originally said.
Others
How do I send feedback and/or ask questions about how to use the tool?
If you run into any issues, have questions about Amplify, or want to provide feedback, you are encouraged to email the team at amplify@vanderbilt.edu.