LangChain, Prompt Engineering, Ollama
AI
How it Works:
The user pastes their training code and enters GPU/CPU details in the Streamlit front-end.
When the user clicks “Get eco-tips,” an HTTP POST request is sent to the FastAPI server with a JSON body containing the code and hardware info.
The FastAPI back-end receives the request and invokes LangChain’s OllamaLLM wrapper.
LangChain formats the prompt template and sends it over HTTP to the local Ollama daemon running Mistral Small Instruct.
Ollama returns the generated text: three energy-saving tips.
FastAPI packages that text into a JSON object and sends it back as the response to the original POST.
The Streamlit front-end receives the JSON, extracts the “recommendations” field, and renders the result live in the browser.
Project Link: https://github.com/alina-ahmed-tech/sdg_hackathon