Mesanite Team Allocation

The document outlines the allocation of problem statements to interns at Mesanite, detailing their names, registration numbers, classes, and team guides. Two primary problem statements are presented: one focusing on automatic prompt refinement for Large Language Models to enhance user input effectiveness, and the other on deepfake detection and media authenticity verification to combat misinformation. Each project aims to develop prototypes that improve response quality and provide user-friendly explanations of results.


Mesanite Interns - Problem Statement Allocation

| Sl No. | Name                   | Register No. | Class          | Team Guide       | Problem Statement |
|--------|------------------------|--------------|----------------|------------------|-------------------|
| 1      | Anindita Saha          | 2460328      | 3 BTCS - B     | Ganesh Bharadwaj | PS1 (Team 1)      |
| 2      | Neha Tresa Boby        | 2460411      | 3 BTCS - C     | Ganesh Bharadwaj | PS1 (Team 1)      |
| 3      | Ankit Pai N            | 2462036      | 3 BTCSAIML - B | Ganesh Bharadwaj | PS1 (Team 1)      |
| 4      | Somalaraju Sai Bhuvana | 2362366      | 5 BTCS DS      | Ganesh Bharadwaj | PS1 (Team 1)      |
| 5      | R K Krithick           | 2361035      | 5 BTIT         | Ganesh Bharadwaj | PS1 (Team 1)      |
| 6      | Abel George            | 2460303      | 3 BTCS - A     | Meghashyam Vivek | PS2 (Team 2)      |
| 7      | Shamith M Gowda        | 2462530      | 3 BTCSAIML - C | Meghashyam Vivek | PS2 (Team 2)      |
| 8      | Joel Jacob Roji        | 2462333      | 3 BTCS DS      | Meghashyam Vivek | PS2 (Team 2)      |
| 9      | Karthik Srivathsa P L  | 2360399      | 5 BTCS - C     | Meghashyam Vivek | PS2 (Team 2)      |
| 10     | Varun John Paul        | 2362189      | 5 BTCS AIML - C| Meghashyam Vivek | PS2 (Team 2)      |
| 11     | Sidharth P S           | 2460452      | 3 BTCS - A     | Nithin Premjith  | PS1 (Team 3)      |
| 12     | Tania Robby            | 2461032      | 3 BTCS - C     | Nithin Premjith  | PS1 (Team 3)      |
| 13     | Freida B Rodrigues     | 2460367      | 3 BTCS - C     | Nithin Premjith  | PS1 (Team 3)      |
| 14     | Suhaas K               | 2362372      | 5 BTCS DS      | Nithin Premjith  | PS1 (Team 3)      |
| 15     | Grace Ann Mathew       | 2360374      | 5 BTCS - B     | Nithin Premjith  | PS1 (Team 3)      |
| 16     | Aksa Mariya Basil      | 2462311      | 3 BTCS DS      | Sam ClintFord    | PS2 (Team 4)      |
| 17     | Kenan Pereira          | 2460392      | 3 BTCS - A     | Sam ClintFord    | PS2 (Team 4)      |
| 18     | Nevan Miranda          | 2462120      | 3 BTCSAIML - A | Sam ClintFord    | PS2 (Team 4)      |
| 19     | Riya Bajaj             | 2362360      | 5 BTCS DS      | Sam ClintFord    | PS2 (Team 4)      |
| 20     | Rupal Sharma           | 2360453      | 5 BTCS - C     | Sam ClintFord    | PS2 (Team 4)      |
| 21     | Lisa Hazel D'Souza     | 2362343      | 5 BTCS DS      | Sam ClintFord    | PS2 (Team 4)      |

The problem statements are detailed below:


Problem Statement 1: Automatic Prompt Refinement for Large Language
Models
Context

Prompt formulation is a critical determinant of the quality and reliability of responses generated
by Large Language Models (LLMs). While LLMs are designed to interpret natural language,
the effectiveness of their outputs is highly sensitive to the phrasing, specificity, and structure
of user input. In real-world usage, most individuals write prompts in everyday natural
language, and such prompts are often ambiguous, incomplete, or imprecise. In contrast,
structured prompts (e.g., schema-driven or JSON-like formats) tend to yield more consistent
and accurate responses but are impractical for non-technical users.
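To make the contrast concrete, here is a hypothetical example of the same request expressed first as an everyday prompt and then as a schema-driven, JSON-like prompt; the field names are illustrative, not a fixed schema.

```python
import json

# An informal, everyday prompt: ambiguous about scope, audience, and format.
unstructured = "tell me about photosynthesis for my assignment"

# The same request as a schema-driven (JSON-like) prompt: explicit task,
# audience, constraints, and output format leave far less to interpretation.
structured = json.dumps(
    {
        "task": "explain",
        "topic": "photosynthesis",
        "audience": "high-school student",
        "length": "about 300 words",
        "format": "short paragraphs ending with a one-sentence summary",
    },
    indent=2,
)

print(structured)
```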

Problem Definition

There exists a gap between human-centered interaction and the structured precision required
for optimal LLM performance. Users naturally prefer informal, unstructured prompts, yet such
prompts frequently lead to degraded or less reliable outputs. The central challenge is to develop
a system that can automatically transform unstructured natural language prompts into
optimized forms that preserve user intent while improving model response quality.

Project Goal
The goal of this project is to design and implement a prompt optimization system that:

❖ Accepts unstructured, everyday natural language inputs,
❖ Infers the user’s underlying intent,
❖ Reorganizes or reformulates the prompt into a more effective structure, and
❖ Produces higher-quality output across different LLMs and task types.
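The steps above can be sketched as a small refinement pipeline. This is a minimal illustration only: the `infer_intent` keyword heuristic stands in for an LLM-based or learned intent classifier, and the intent names and output schema are assumptions, not part of the problem statement.

```python
import json

# Hypothetical intent categories; a real system would infer these with an
# LLM or trained classifier rather than surface keyword matching.
INTENT_KEYWORDS = {
    "summarize": ["summarize", "summary", "tl;dr"],
    "explain": ["explain", "what is", "how does"],
    "generate": ["write", "create", "draft"],
}

def infer_intent(prompt: str) -> str:
    """Guess the user's underlying intent from surface cues (placeholder)."""
    lowered = prompt.lower()
    for intent, cues in INTENT_KEYWORDS.items():
        if any(cue in lowered for cue in cues):
            return intent
    return "answer"  # fallback when no cue matches

def refine(prompt: str) -> str:
    """Reformulate an unstructured prompt into a structured form that
    preserves the user's original wording as context."""
    structured = {
        "intent": infer_intent(prompt),
        "original_request": prompt.strip(),
        "instructions": [
            "Resolve any ambiguity in the request before answering.",
            "State assumptions explicitly.",
        ],
        "output_format": "clear prose with headings where helpful",
    }
    return json.dumps(structured, indent=2)

print(refine("explain how transformers work pls"))
```

The structured result can then be sent to any LLM in place of the raw prompt, which is what allows the same refinement layer to be evaluated across different models and task types.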

Expected Outcomes

❖ A functional prototype that performs automatic prompt refinement.
❖ Demonstrated improvements in response quality, reliability, and relevance across test cases.
❖ Analytical insights into the effectiveness of different refinement strategies and their generalizability.

Problem Statement 2: Deepfake Detection and Media Authenticity
Verification
Context

The rapid advancement of generative AI has blurred the boundary between authentic and
synthetic media. High-quality deepfakes, AI-generated photos, videos, and other digital
artifacts pose significant challenges to online trust, enabling the spread of misinformation,
fraud, and identity manipulation. Current verification tools are either narrowly focused,
technically complex, or inaccessible to non-expert users, leaving the majority of the public
unable to reliably assess media authenticity.

Problem Definition

The fundamental challenge lies in developing a robust system that can automatically
distinguish authentic media from AI-generated content across diverse formats (images and
videos). Beyond classification, the system must also offer explainability and transparency,
since a mere binary label (“real” or “fake”) is insufficient for fostering user trust. Existing
solutions fall short by either lacking generalizability, focusing only on isolated cues, or
presenting results in ways that are not user-friendly.

Project Goal
The goal of this project is to design and prototype a media authenticity verification system that:

❖ Analyzes both images and videos for potential AI manipulation,
❖ Detects and highlights suspicious features or anomalies,
❖ Provides a confidence score quantifying the likelihood of manipulation, and
❖ Uses an integrated Large Language Model (LLM) to explain results in clear, natural language accessible to non-technical users.
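The scoring and explanation steps can be sketched as follows. This is a hedged illustration under heavy assumptions: the detector names and weights are invented for the example, the upstream detectors are assumed to emit per-cue anomaly scores in [0, 1], and the templated explanation stands in for the LLM-generated summary the project calls for.

```python
# Illustrative detection cues and weights; a real system would learn or
# calibrate these rather than hard-code them.
DETECTOR_WEIGHTS = {
    "face_blending_artifacts": 0.40,
    "temporal_inconsistency": 0.35,
    "frequency_domain_anomaly": 0.25,
}

def confidence_score(cue_scores: dict) -> float:
    """Combine per-cue anomaly scores into one manipulation-likelihood score."""
    return sum(DETECTOR_WEIGHTS[cue] * s for cue, s in cue_scores.items())

def explain(cue_scores: dict, threshold: float = 0.5) -> str:
    """Template-based explanation; a deployed system would instead hand the
    cue scores to an LLM to produce the natural-language summary."""
    score = confidence_score(cue_scores)
    verdict = "likely manipulated" if score >= threshold else "likely authentic"
    flagged = [cue for cue, s in cue_scores.items() if s >= threshold]
    detail = (
        "Suspicious cues: " + ", ".join(flagged) + "."
        if flagged
        else "No individual cue crossed the alert threshold."
    )
    return f"This media is {verdict} (confidence {score:.2f}). {detail}"

print(explain({
    "face_blending_artifacts": 0.8,
    "temporal_inconsistency": 0.6,
    "frequency_domain_anomaly": 0.3,
}))
```

Separating the numeric score from the worded verdict mirrors the problem statement's point that a bare binary label is insufficient: the confidence value supports evaluation, while the sentence is what a non-technical user actually reads.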

Expected Outcomes

❖ A prototype capable of analyzing both images and videos for authenticity.
❖ A scoring mechanism that indicates confidence in classification results.
❖ Transparent natural-language explanations powered by an LLM.
❖ Comparative evaluation of different detection methods and insights into their effectiveness.
