Deep learning models often suffer significant performance degradation under domain shift, where test data is drawn from a distribution different from the training data. This paper introduces Spectral Geometric Regularization (SGR), a novel framework that learns domain-invariant representations by aligning the intrinsic geometries of the source and target domains. Unlike prior methods that often rely on statistical moment matching, SGR minimizes the discrepancy between the eigenvalue spectra of graph Laplacians constructed from the source and target feature manifolds. Grounded in the theory of the Laplace-Beltrami operator, the proposed spectral loss encourages isometry, a fundamental geometric equivalence, between the two domains. We provide theoretical guarantees for our framework, establishing the differentiability of the spectral loss and deriving a probabilistic bound on the target error that directly links spectral alignment to improved generalization. As an architecture-agnostic regularizer, SGR presents a principled and theoretically sound alternative to existing domain adaptation paradigms.
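The abstract does not fix the exact form of the spectral loss, so the sketch below illustrates one plausible instantiation, assuming a Gaussian-kernel affinity graph over mini-batch features, the unnormalized Laplacian L = D − W, and an L2 penalty on the k smallest eigenvalues. The names `graph_laplacian` and `spectral_loss` and the parameters `sigma` and `k` are illustrative choices, not the paper's API.

```python
import torch


def graph_laplacian(feats: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Unnormalized graph Laplacian L = D - W from a Gaussian-kernel affinity graph.

    Assumes feats has shape (n, d): one row per sample in the mini-batch.
    """
    sq_dists = torch.cdist(feats, feats) ** 2       # pairwise squared distances
    W = torch.exp(-sq_dists / (2.0 * sigma ** 2))   # heat-kernel affinities
    D = torch.diag(W.sum(dim=1))                    # degree matrix
    return D - W


def spectral_loss(src_feats: torch.Tensor, tgt_feats: torch.Tensor, k: int = 16) -> torch.Tensor:
    """L2 discrepancy between the k smallest Laplacian eigenvalues of each domain."""
    ev_src = torch.linalg.eigvalsh(graph_laplacian(src_feats))  # ascending order
    ev_tgt = torch.linalg.eigvalsh(graph_laplacian(tgt_feats))
    k = min(k, ev_src.numel(), ev_tgt.numel())
    return ((ev_src[:k] - ev_tgt[:k]) ** 2).sum()


# Usage: add the spectral term to a task loss during training.
src = torch.randn(64, 128, requires_grad=True)  # batch of source features
tgt = torch.randn(64, 128)                      # batch of target features
loss = spectral_loss(src, tgt)
loss.backward()  # gradients flow through eigvalsh back to the features
```

Consistent with the abstract's differentiability claim, `torch.linalg.eigvalsh` is differentiable (its gradient is well defined when the eigenvalues are distinct), so a term of this form can be added to any task loss without modifying the backbone, matching the architecture-agnostic framing; the kernel bandwidth and truncation level are the sketch's assumptions, not details taken from the paper.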