
[Bug] double free or corruption (fasttop) #4266

@warlockedward

Description


Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

[TM][WARNING] [ProcessInferRequests] [1] total sequence length (136 + 130936) exceeds session_len (131072), max_new_tokens is truncated to 130935

[TM][INFO] [SeqMgr][match] ID 1, hit blocks 0, cache_len 0

[TM][INFO] [SeqMgr][match] ID 1, after matching, blocks 0, cache_len 0

[TM][INFO] [Forward] [0, 1), dc=0, pf=1, sum_q=136, sum_k=136, max_q=136, max_k=136

[TM][INFO] [SeqMgr][CachePrompt] ID 1, cached blocks 2, tokens 136

NCCL version 2.27.3+cuda12.9

[22:25:09] /lmdeploy/build/temp.linux-x86_64-cpython-311__turbomind/_deps/xgrammar-src/cpp/grammar_matcher.cc:422: Warning: The matcher has terminated after accepting the stop token, but is trying to accept new token with id 151645.

[22:25:09] /lmdeploy/build/temp.linux-x86_64-cpython-311__turbomind/_deps/xgrammar-src/cpp/grammar_matcher.cc:422: Warning: The matcher has terminated after accepting the stop token, but is trying to accept new token with id 151645.

[22:25:09] /lmdeploy/build/temp.linux-x86_64-cpython-311__turbomind/_deps/xgrammar-src/cpp/grammar_matcher.cc:422: Warning: The matcher has terminated after accepting the stop token, but is trying to accept new token with id 151645.

[TM][INFO] [Interrupt] slot 0, request 1, stop 0

[TM][INFO] [SeqMgr][CacheGeneration] ID 1, cached blocks 2, tokens 141

2026-01-10 22:25:09,303 - lmdeploy - INFO - turbomind.py:819 - [async_stream_infer] session 1 done

2026-01-10 22:25:09,303 - lmdeploy - INFO - async_engine.py:977 - session 1 finished, reason "stop", input_tokens 136, output_tokens 4

INFO: 192.254.90.4:47940 - "POST /v1/chat/completions HTTP/1.1" 200 OK

[2026-01-10 22:25:17 DP0] Avg prompt throughput: 13.6 tokens/s, Avg generation throughput: 0.5 tokens/s, Finished: 1 reqs, Unfinished: 0 reqs, Running: 0 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.1%, Prefix cache hit rate: 0.0%

2026-01-10 22:26:21,530 - lmdeploy - INFO - logger.py:45 - session=2, adapter_name=None, input_tokens=136, gen_config=GenerationConfig(n=1, max_new_tokens=None, do_sample=True, top_p=1.0, top_k=40, min_p=0.0, temperature=0.7, repetition_penalty=1.0, ignore_eos=False, random_seed=2515599849837740922, stop_words=None, bad_words=None, stop_token_ids=[151643, 151645], bad_token_ids=None, min_new_tokens=None, skip_special_tokens=True, spaces_between_special_tokens=True, logprobs=None, response_format={'type': 'json_object', 'json_schema': None, 'regex_schema': None}, logits_processors=None, output_logits=None, output_last_hidden_state=None, include_stop_str_in_output=False, with_cache=False, preserve_cache=False, migration_request=None, return_routed_experts=False), prompt='<|im_start|>system\nYou are a helpful assistant that responds in JSON format.

Reproduction

lmdeploy serve api_server /model/models/Qwen3-32B-FP8 --model-format fp8 --tp 4 --model-name Qwen3 --dtype float16 --enable-prefix-caching --cache-max-entry-count 0.7 --log-level INFO --backend turbomind --tool-call-parser qwen3 --chat-template qwen3 --reasoning-parser qwen-qwq
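For reference, a request along the following lines exercises the JSON-constrained decoding path seen in the logs. This is a hypothetical minimal client sketch reconstructed from the logged gen_config (response_format={'type': 'json_object'}, temperature=0.7, top_p=1.0); the base_url, prompt text, and user message are assumptions, not the exact request that crashed:

```python
# Hypothetical repro client against the OpenAI-compatible endpoint that
# `lmdeploy serve api_server` exposes (port 23333 by default).
# Prompt/content below are assumptions; only response_format, temperature,
# and top_p are taken from the logged gen_config.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:23333/v1", api_key="none")

resp = client.chat.completions.create(
    model="Qwen3",
    messages=[
        {"role": "system",
         "content": "You are a helpful assistant that responds in JSON format."},
        {"role": "user", "content": 'Return {"status": "ok"} as JSON.'},
    ],
    temperature=0.7,
    top_p=1.0,
    # json_object mode is what engages the XGrammar matcher in TurboMind
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)
```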

Environment

/model/anaconda3/envs/lmdeploy/lib/python3.11/site-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.
  import pynvml  # type: ignore[import]
sys.platform: linux
Python: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: Tesla V100-SXM2-32GB
CUDA_HOME: /usr/local/cuda-12.9
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
PyTorch: 2.8.0+cu128
PyTorch compiling details: PyTorch built with:
  - GCC 13.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.7.1 (Git Hash 8d263e693366ef8db40acc569cc7d8edf644556d)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.8
  - NVCC architecture flags: -gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90;-gencode;arch=compute_100,code=sm_100;-gencode;arch=compute_120,code=sm_120
  - CuDNN 91.0.2  (built against CUDA 12.9)
    - Built with CuDNN 90.8
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=a1cb3cc05d46d198467bebbb6e8fba50a325d4e7, CUDA_VERSION=12.8, CUDNN_VERSION=9.8.0, CXX_COMPILER=/opt/rh/gcc-toolset-13/root/usr/bin/c++, CXX_FLAGS= -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.8.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, USE_XCCL=OFF, USE_XPU=OFF, 

TorchVision: 0.23.0+cu128
LMDeploy: 0.11.1+
transformers: 4.57.1
fastapi: 0.120.1
pydantic: 2.12.3
triton: 3.4.0
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV1     NV2     NV1     SYS     SYS     SYS     NV2     NODE    NODE    SYS     SYS     0-23,48-71      0               N/A
GPU1    NV1      X      NV1     NV2     SYS     SYS     NV2     SYS     NODE    NODE    SYS     SYS     0-23,48-71      0               N/A
GPU2    NV2     NV1      X      NV2     SYS     NV1     SYS     SYS     PIX     PIX     SYS     SYS     0-23,48-71      0               N/A
GPU3    NV1     NV2     NV2      X      NV1     SYS     SYS     SYS     PIX     PIX     SYS     SYS     0-23,48-71      0               N/A
GPU4    SYS     SYS     SYS     NV1      X      NV2     NV2     NV1     SYS     SYS     PIX     PIX     24-47,72-95     1               N/A
GPU5    SYS     SYS     NV1     SYS     NV2      X      NV1     NV2     SYS     SYS     PIX     PIX     24-47,72-95     1               N/A
GPU6    SYS     NV2     SYS     SYS     NV2     NV1      X      NV1     SYS     SYS     NODE    NODE    24-47,72-95     1               N/A
GPU7    NV2     SYS     SYS     SYS     NV1     NV2     NV1      X      SYS     SYS     NODE    NODE    24-47,72-95     1               N/A
NIC0    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS
NIC1    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS
NIC2    SYS     SYS     SYS     SYS     PIX     PIX     NODE    NODE    SYS     SYS      X      PIX
NIC3    SYS     SYS     SYS     SYS     PIX     PIX     NODE    NODE    SYS     SYS     PIX      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3

Error traceback

Suspected root cause: a memory conflict. The crash happens immediately after two specific conditions coincide:

Context overflow: the total sequence length (136 + 130936) exceeds session_len (131072), so max_new_tokens is truncated to 130935.

Grammar constraints: the request uses response_format={'type': 'json_object'}, which enables XGrammar-constrained decoding; note the repeated grammar_matcher.cc warnings above about accepting tokens after the stop token.

The trigger: the logs show [TM][INFO] Set grammar for model_request. When the engine truncates max_new_tokens to fit the session length while simultaneously initializing the XGrammar matcher for JSON enforcement, it likely hits a race condition or an invalid pointer operation in the TurboMind C++ core, producing the "double free or corruption (fasttop)" abort.
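As a stopgap while this is investigated, one client-side workaround sketch (an assumption based on the analysis above, not a confirmed fix) is to cap max_tokens so the engine never has to truncate max_new_tokens while grammar-constrained decoding is being set up:

```python
# Hedged workaround sketch: keep prompt_tokens + max_tokens below session_len
# so TurboMind never truncates max_new_tokens for a grammar-constrained
# request. SESSION_LEN matches the logged limit; the safety margin is an
# assumed headroom for chat-template tokens. `client` and `messages` are
# from the repro sketch above.
SESSION_LEN = 131072
SAFETY_MARGIN = 256  # assumed headroom, not a measured value

def capped_max_tokens(prompt_tokens: int) -> int:
    """Return a max_tokens value that cannot overflow the session length."""
    return max(1, SESSION_LEN - prompt_tokens - SAFETY_MARGIN)

resp = client.chat.completions.create(
    model="Qwen3",
    messages=messages,
    max_tokens=capped_max_tokens(136),  # 136 = input_tokens from the logs
    response_format={"type": "json_object"},
)
```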
