
Telco Troubleshooting Agentic Challenge

€40,000
18 days left
Agentic AI
Fine-tuning
Large Language Models
886 joined
176 active
Start: Apr 17, 2026
Close: May 18, 2026
Reveal: May 29, 2026
Some clarifications related to the competition
18 Apr 2026, 19:47 · 4

Hi,

First of all, thank you for hosting this competition; it is an interesting challenge.

I am focusing on Track B, so all my questions relate to that.

My assumption is that the Phase 1 score doesn't really count, as this phase is aimed at understanding the environment.

Phase 1 question:

1. Will the Phase 1 ground truth be released prior to Phase 2?

Phase 2 question:

2. I assume that in Phase 2 we cannot work with a locally hosted environment and have to use the cloud server; can you confirm?

Phase 3 questions:

1. What is the expected inference speed in Phase 3 (or exactly which GPU will be used)?

2. Is memory across questions permissible?

3. Is topology discovery prior to the questions allowed?

4. Are there any resource limits (container CPU/RAM/GPU) in Phase 3 grading?

5. Are LoRA adapters expected in bf16, or can other quantizations be used as well?

6. Can a quantized, lower-precision version be used in Phase 3 (can a merged model be provided)?

Thank you very much in advance

L

Discussion (4 answers)

Hi,

Phase 1 question:

1. Yes. We plan to release the golden ground truth of Phase 1 when Phase 1 is finished.

Phase 2 question:

1. Yes. Phase 2 only allows API calls from the cloud service.

Phase 3 questions:

1. We prefer not to share this information. Everyone will have access to the same resources.
2. No. Each question has an independent context.
3. Yes. We do not interfere with the way agents solve problems.
4. We prefer not to share this information. Everyone will have access to the same resources.
5. Any parameter precision is acceptable.
6. Any parameter precision is acceptable.
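The "independent context" clarification suggests each question should be answered from a clean slate. A minimal sketch of what that contract could look like on the agent side (all names here are hypothetical illustrations, not the competition's actual API):

```python
# Hypothetical per-question harness: every question gets a fresh message
# history, so no memory or cached topology carries over between questions.

SYSTEM_PROMPT = "You are a telco troubleshooting agent."  # placeholder prompt

def answer_question(question, llm_call):
    """Answer one question with a brand-new conversation context."""
    # Fresh history per call: nothing from earlier questions is reused.
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]
    return llm_call(messages)

def run_benchmark(questions, llm_call):
    # Deliberately no shared state (cache, notes, topology map) between calls.
    return [answer_question(q, llm_call) for q in questions]
```

`llm_call` stands in for whatever model endpoint a submission actually uses; the point is only that no state object survives across `answer_question` calls.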

20 Apr 2026, 12:17

Hi Antonio, thank you for the response!

Brainiac

Hi @AntonioDeDomenico, thanks for the earlier clarifications. A few follow-ups specifically on Phase 3 execution, to help us package submissions correctly:

  1. Is the Phase 3 GPU NVIDIA (CUDA) or Huawei Ascend (CANN)? This directly changes what we can ship — CUDA-compiled artifacts (AWQ kernels, Flash-Attention, custom Triton) won't run on Ascend, and vice versa. Even just confirming the ecosystem (without disclosing the exact chip) would be enormously helpful.
  2. Will you serve the base model yourself (e.g., via vLLM, SGLang, MindIE) and we just submit LoRA weights + main.py, or do we submit a full runnable environment (Dockerfile, requirements.txt, our own serving code)?
  3. If we submit code + weights only: what Python, PyTorch, and CUDA/CANN versions will be installed in the execution environment? This lets us pin requirements.txt to match.
  4. Is the Agent Tool Server in Phase 3 the same HF-Spaces-CPU deployment as Phase 2, or hosted on dedicated infrastructure during grading? This affects our tool-call latency budget planning.

I understand you may not want to disclose exact GPU model or VRAM. Even partial answers on the ecosystem (CUDA vs CANN) and the submission contract (weights-only vs full-container) are what we need most. Thanks!

1. CUDA.

2. Please submit the full runnable environment with a one-click button flow and make sure the environment can run successfully. The GPU deployment code and the Agent execution code need to be separated, because one is deployed on the GPU server and the other on the CPU server.

3. Please refer to question 2.

4. Please do not worry about tool-call latency. The latency will be short enough not to affect the overall runtime.
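The CPU/GPU split described in answer 2 implies the agent code calls the model deployment over the network. A rough agent-side sketch, assuming an OpenAI-compatible chat endpoint (as served by e.g. vLLM); the URL, model name, and payload shape are placeholders, not the competition's actual contract:

```python
# Hypothetical sketch: agent code (CPU server) querying the model
# deployment (GPU server) over HTTP. Endpoint and payload assume an
# OpenAI-compatible server; the real Phase 3 contract may differ.
import json
import urllib.request

GPU_SERVER_URL = "http://gpu-server:8000/v1/chat/completions"  # placeholder

def build_payload(model, messages):
    """Serialise a chat-completion request body for the model server."""
    return json.dumps({"model": model, "messages": messages}).encode("utf-8")

def query_model(messages, model="base-model-with-lora"):
    """Runs on the CPU server; the model itself runs on the GPU server."""
    req = urllib.request.Request(
        GPU_SERVER_URL,
        data=build_payload(model, messages),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Keeping the serving stack and the agent loop in separate modules like this makes the required separation of the two deployables straightforward.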