3D Gaussian Splatting (3DGS) SLAM is widely used for high-fidelity mapping in spatial intelligence. However, current methods often rely on a single-representation constraint, which limits their performance in large-scale dynamic outdoor scenes and leads to cumulative pose errors and scale ambiguity. To address these challenges, we propose LVD-GS, a novel LiDAR-Visual 3D Gaussian Splatting SLAM system for dynamic scenes. Inspired by the human coarse-to-fine comprehension process, we propose a hierarchical representation collaboration module in which the representations mutually reinforce one another during map optimization, effectively mitigating scale drift and enhancing reconstruction robustness. Furthermore, we propose a joint dynamic modeling module that generates fine-grained dynamic masks by fusing open-world detection with implicit residual constraints, guided by uncertainty estimates derived from DINO-Depth features. Extensive evaluations on KITTI, nuScenes, and self-collected datasets demonstrate that our approach achieves state-of-the-art performance.