25 July, 2025 | Contest Starts and Registration Begins
15 August, 2025 | Registration Ends
28 August, 2025 (11:59 PM IST) | Final Solution Submission Deadline
29 August, 2025 - 5 September, 2025 | Evaluation of Submissions
8 September, 2025 | Finalists (Top-5 teams) Notified
18 September, 2025 | Grand Finale - Finalists (Top 5 teams) present their AI Challenge solutions to the Grand Jury in person at Samsung EnnovateX 2025
Pick 1 of the following problem statements.
A single, powerful multimodal foundation model can serve as unchangeable firmware within an edge/mobile operating system, enabling applications to use compact "adapters" (for varied downstream tasks – text, image, audio, video) instead of bundling several large models. Possible architectural innovations include a frozen firmware backbone with task-specific adapters, multi-path execution that routes tasks efficiently based on complexity, and demonstration of system benefits through metrics such as latency and battery performance.
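One way to picture the backbone-plus-adapters idea is the minimal sketch below. Everything here is an illustrative assumption, not a prescribed design: the hidden size, the LoRA-style low-rank adapter, and the norm-based "complexity" proxy standing in for a learned router.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # backbone hidden size (illustrative)

class FrozenBackbone:
    """Stands in for the shared multimodal foundation model shipped as firmware."""
    def __init__(self):
        self.W = rng.standard_normal((D, D)) / np.sqrt(D)  # frozen, never updated
    def encode(self, x):
        return np.tanh(x @ self.W)

class LoRAAdapter:
    """Compact task-specific adapter: a low-rank residual update (rank r << D),
    small enough for an app to bundle instead of a full model."""
    def __init__(self, rank=4):
        self.A = rng.standard_normal((D, rank)) * 0.01
        self.B = rng.standard_normal((rank, D)) * 0.01
    def apply(self, h):
        return h + h @ self.A @ self.B

def route(x, backbone, adapter, threshold=1.0):
    """Multi-path execution: cheap backbone-only path for simple inputs,
    adapter path for complex ones. The norm-based complexity proxy is a
    placeholder for a learned router."""
    h = backbone.encode(x)
    complexity = float(np.linalg.norm(x)) / np.sqrt(x.size)
    if complexity < threshold:
        return h, "fast-path"
    return adapter.apply(h), "adapter-path"

backbone = FrozenBackbone()
adapter = LoRAAdapter()
h, path = route(rng.standard_normal(D), backbone, adapter)
```

A real submission would replace the toy router with a trained policy and report the latency/battery deltas between the two paths.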
Reimagine a smartphone that doesn't just run apps, but truly understands and assists the user: an agent that sees what you see, hears what you hear, and remembers your experiences to provide contextual, real-time help, all without a constant connection to the cloud.
A multi-agent system that runs fully on-device, continuously learning and modelling user behaviour patterns to detect anomalies or potential fraud in real time, without sending sensitive data to external servers. The system can monitor user behaviour (e.g., touch patterns, typing rhythm, app usage, movement) and build local models of “normal” behaviour, then detect and react to anomalous or suspicious activity (e.g., unauthorized access, bot-like behaviour, spoofing).
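A minimal sketch of the "local model of normal behaviour" idea, assuming a simple per-feature z-score baseline; the feature names, thresholds, and synthetic data are all illustrative, and a real system would use richer learned models:

```python
import random
import statistics

class BehaviourModel:
    """On-device baseline of 'normal' behaviour; flags samples whose
    z-score exceeds a threshold on any feature. Nothing leaves the device:
    all history stays in local memory."""
    def __init__(self, z_threshold=3.0, min_samples=10):
        self.history = {}              # feature -> observed values
        self.z_threshold = z_threshold
        self.min_samples = min_samples

    def observe(self, sample):
        for feature, value in sample.items():
            self.history.setdefault(feature, []).append(value)

    def is_anomalous(self, sample):
        for feature, value in sample.items():
            values = self.history.get(feature, [])
            if len(values) < self.min_samples:
                continue               # not enough data to judge yet
            mean = statistics.fmean(values)
            stdev = statistics.stdev(values) or 1e-9
            if abs(value - mean) / stdev > self.z_threshold:
                return True
        return False

random.seed(0)
model = BehaviourModel()
for _ in range(50):                    # synthetic 'normal' usage history
    model.observe({"typing_interval_ms": random.gauss(180, 15),
                   "swipe_speed": random.gauss(1.2, 0.1)})
```

A typing interval far outside the learned baseline (e.g., bot-like input) would then be flagged, while ordinary variation would not.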
An agentic system that intelligently optimizes battery usage for a target application, operating fully on-device with no reliance on cloud computation. It should be adaptive, modular, and context-aware, ensuring the target application continues to function effectively while maximizing battery life.
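As a toy illustration of a context-aware power policy (the profile names, thresholds, and context signals are assumptions for the sketch; an agentic system would learn and adapt them per user and per app):

```python
def choose_profile(battery_pct, screen_on, network):
    """Pick a power profile for the target app from current context.
    All profile names and thresholds below are illustrative assumptions."""
    if battery_pct < 15:
        return "minimal"      # defer sync, lowest refresh rate
    if not screen_on:
        return "background"   # batch network calls, suspend rendering
    if network == "cellular" and battery_pct < 40:
        return "balanced"     # reduce prefetching on metered links
    return "full"
```

The fixed rules here are the part an agent would replace with a learned, feedback-driven policy.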
An efficient framework for on-device fine-tuning of billion-plus-parameter Large Language Models on a Galaxy S23-S25-class smartphone or edge device. Enable a typical application to adapt a pre-trained LLM to a user's personal data, all while operating within the tight constraints of a mobile environment.
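One common route to fitting fine-tuning in a mobile memory budget is parameter-efficient adaptation such as LoRA: freeze the pre-trained weights and train only a low-rank update. The numpy sketch below shows the mechanic on a single toy linear layer; the dimensions, data, and learning rate are all illustrative assumptions, not a working mobile framework.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 16, 2, 64            # hidden size, LoRA rank, batch size (toy scale)

W = rng.standard_normal((d, d)) / np.sqrt(d)   # frozen pre-trained weight
A = np.zeros((d, r))                            # trainable low-rank factor
B = rng.standard_normal((r, d)) / np.sqrt(d)    # trainable low-rank factor

x = rng.standard_normal((n, d))                 # stand-in for personal data
target = x @ W + 0.5 * np.sin(x)                # behaviour to adapt toward

def mse(A, B):
    pred = x @ (W + A @ B)                      # adapted layer: W + A @ B
    return float(np.mean((pred - target) ** 2))

initial = mse(A, B)
lr = 0.5
for _ in range(300):                            # gradient descent on A, B only
    pred = x @ (W + A @ B)
    g = 2.0 * (pred - target) / (n * d)         # dLoss/dpred
    gAB = x.T @ g                               # dLoss/d(A @ B)
    gA, gB = gAB @ B.T, A.T @ gAB
    A -= lr * gA
    B -= lr * gB
final = mse(A, B)
```

Only `2*d*r` values per layer are trained and stored, which is what makes this style of adaptation plausible within smartphone RAM and storage constraints.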
Let's go beyond traditional model and interaction capabilities on edge devices: a solution that addresses a real-world problem by leveraging on-device Generative AI while pioneering novel, effective, and intuitive Human-AI Interaction (H-AI).
Develop an AI-based solution using monostatic integrated sensing and communication (ISAC) to estimate UAV range, velocity, and direction of arrival, leveraging advanced signal processing and machine learning. Utilize the channel model based on 3GPP TR 38.901-j00 (Rel-19) Section 7.9 for ISAC applications. Participants are expected to design models that extract these parameters from ISAC signals under the specified channel conditions.
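For intuition on what the model must recover, the sketch below simulates a single-target monostatic echo and applies the classical estimates an ML model would learn to refine: range from the round-trip delay (R = cτ/2) and radial velocity from the Doppler shift (f_d = 2 v f_c / c) via an FFT peak over slow time. Carrier, PRF, and target values are illustrative assumptions, not drawn from the 3GPP channel model, and direction-of-arrival estimation (which needs an antenna array) is omitted.

```python
import numpy as np

c = 3.0e8                      # speed of light (m/s)
fc = 28e9                      # carrier frequency (illustrative, FR2-like)
prf = 10e3                     # pulse repetition frequency (illustrative)
n_pulses = 256

true_range = 450.0             # m
true_velocity = 20.0           # m/s, radial

# Monostatic geometry: the echo carries a 2R/c round-trip delay and a
# 2*v*fc/c Doppler shift.
tau = 2 * true_range / c       # here tau is the known simulation value; a
fd = 2 * true_velocity * fc / c  # real system measures it from the echo

# Slow-time samples at the target's range bin: a complex exponential at fd.
t = np.arange(n_pulses) / prf
echo = np.exp(2j * np.pi * fd * t)

range_est = c * tau / 2                          # range from measured delay
spectrum = np.abs(np.fft.fft(echo))              # Doppler from FFT peak
freqs = np.fft.fftfreq(n_pulses, d=1 / prf)
v_est = freqs[int(np.argmax(spectrum))] * c / (2 * fc)
```

Under the TR 38.901 ISAC channel the clean exponential becomes a multipath, noisy return, which is where a learned estimator can outperform the plain FFT peak pick.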
Applications are affected differently under varying traffic conditions, channel states, and coverage scenarios. If each UE's traffic can be categorized into broader classes such as Video Streaming, Audio Calls, Video Calls, Gaming, Video Uploads, Browsing, and Texting, the network can serve differentiated, curated QoS for each traffic type. Develop an AI model that analyzes a traffic pattern and predicts the application category with high accuracy.
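A minimal sketch of the classification task, assuming hand-picked flow statistics and a nearest-centroid rule in place of a trained model; the feature choices, centroid values, and normalization scales are all illustrative assumptions:

```python
import math

# Illustrative per-flow statistics: (mean packet size in bytes,
# downlink/uplink byte ratio, packets per second). Values are assumptions
# for the sketch, not measured traffic profiles.
CENTROIDS = {
    "Video Streaming": (1200.0, 20.0, 400.0),
    "Audio Call":      (160.0,  1.0,  50.0),
    "Gaming":          (90.0,   1.5, 120.0),
    "Browsing":        (700.0,  8.0,  60.0),
}
SCALES = (1500.0, 25.0, 500.0)   # rough feature ranges for normalization

def classify(flow):
    """Nearest-centroid stand-in for the trained traffic-category model."""
    def norm(v):
        return [x / s for x, s in zip(v, SCALES)]
    return min(CENTROIDS, key=lambda k: math.dist(norm(flow), norm(CENTROIDS[k])))
```

A competitive entry would learn such boundaries from labeled traces and stay robust as channel state and congestion shift the raw statistics.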
SNS applications (such as Facebook and YouTube) transmit both video (short videos, reels, etc.) and non-video traffic (feeds, suggestions, etc.) through the same data pipeline. Develop an AI model to differentiate reel/video traffic from non-reel/video traffic in real time, enabling user equipment (UE) to optimize performance dynamically. The model should also maintain accuracy under varying network congestion and coverage conditions.
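As a baseline for the real-time aspect, the sketch below uses a sliding-window throughput heuristic: sustained high downlink rates are treated as reel/video traffic. The window length, rate threshold, and "video"/"non-video" labels are illustrative assumptions; a trained model would replace the fixed threshold to stay accurate under congestion.

```python
from collections import deque

class VideoTrafficDetector:
    """Sliding-window stand-in for the real-time classifier: sustained
    high downlink throughput is labelled reel/video traffic. Thresholds
    are illustrative assumptions, not tuned values."""
    def __init__(self, window=10, rate_threshold_kBps=300.0):
        self.bytes_per_tick = deque(maxlen=window)
        self.rate_threshold = rate_threshold_kBps

    def update(self, downlink_bytes):
        """Feed bytes received in the latest 1-second tick; return a label."""
        self.bytes_per_tick.append(downlink_bytes)
        rate_kBps = sum(self.bytes_per_tick) / len(self.bytes_per_tick) / 1000
        return "video" if rate_kBps >= self.rate_threshold else "non-video"
```

Because it keeps only a short window of per-tick byte counts, a detector of this shape can run continuously on the UE with negligible overhead.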
Kindly go through all the event details, rules & guidelines carefully. Not abiding by the rules may lead to disqualification from the contest.
The AI Challenge will be conducted in two phases:
Participants may use:
Participants are allowed to publish (open-source) any synthetic or proprietary dataset used in their project, but are responsible for any legal compliance and permissions required to do so. The dataset may be published under a Creative Commons, Open Data Commons, or equivalent license.
Participants must not use:
The evaluation criteria for Phase 1 are provided below:
The evaluation criteria for Phase 2 are provided below:
For any query or support, please feel free to reach out to us at ennovatex.io@samsung.com.