Nonsmooth stochastic optimization has emerged as a fundamental framework for modeling complex machine learning problems, particularly those involving constraints. Proximal stochastic gradient descent (proximal SGD) is the predominant algorithm for solving such problems. While most existing work focuses on the i.i.d. data setting, nonsmooth optimization under Markovian sampling remains largely unexplored. In this work, we propose an online statistical inference procedure for nonsmooth optimization under Markovian sampling using proximal SGD. We establish asymptotic normality of the averaged proximal SGD iterates and introduce a random scaling strategy that constructs parameter-free pivotal statistics through appropriate normalization. This approach yields asymptotically valid confidence intervals that can be computed in a fully online manner. Numerical experiments support the theory and demonstrate the practical effectiveness of the proposed procedure.
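
To make the pipeline concrete, below is a minimal sketch, assuming a lasso-type least-squares objective with AR(1) (Markovian) features, of proximal SGD with Polyak-Ruppert averaging and a random-scaling covariance estimator of the kind used in the SGD-inference literature. The problem instance, step-size schedule, function names, and the 6.747 critical value are illustrative assumptions, not the exact procedure or constants of this paper.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def proximal_sgd_random_scaling(n_iter=100_000, d=5, lam=0.01, rho=0.5,
                                eta0=0.5, alpha=0.505, seed=0):
    rng = np.random.default_rng(seed)
    theta_star = np.zeros(d)
    theta_star[:2] = [1.0, -0.5]              # sparse ground truth (illustrative)
    x_prev = rng.standard_normal(d)           # Markov chain state (AR(1) features)

    theta = np.zeros(d)                       # current proximal SGD iterate
    theta_bar = np.zeros(d)                   # Polyak-Ruppert average
    # Running sums for an online random-scaling covariance estimate
    S_outer = np.zeros((d, d))                # sum_s s^2 * theta_bar_s theta_bar_s^T
    S_lin = np.zeros(d)                       # sum_s s^2 * theta_bar_s
    S_sq = 0.0                                # sum_s s^2

    for t in range(1, n_iter + 1):
        # Markovian sampling: features follow a stationary AR(1) chain
        x = rho * x_prev + np.sqrt(1.0 - rho**2) * rng.standard_normal(d)
        y = x @ theta_star + rng.standard_normal()
        x_prev = x

        eta = eta0 * t ** (-alpha)            # diminishing step size
        grad = -(y - x @ theta) * x           # stochastic gradient of squared loss
        theta = soft_threshold(theta - eta * grad, eta * lam)  # proximal step

        theta_bar += (theta - theta_bar) / t  # online iterate averaging
        S_outer += t**2 * np.outer(theta_bar, theta_bar)
        S_lin += t**2 * theta_bar
        S_sq += t**2

    n = n_iter
    # V_n = n^{-2} * sum_s s^2 (theta_bar_s - theta_bar_n)(theta_bar_s - theta_bar_n)^T
    V = (S_outer - np.outer(S_lin, theta_bar) - np.outer(theta_bar, S_lin)
         + S_sq * np.outer(theta_bar, theta_bar)) / n**2

    # Per-coordinate 95% intervals; 6.747 is the (assumed) two-sided critical
    # value of the random-scaling pivotal limit tabulated in that literature.
    half_width = 6.747 * np.sqrt(np.maximum(np.diag(V), 0.0) / n)
    return theta_bar, theta_bar - half_width, theta_bar + half_width

if __name__ == "__main__":
    est, lo, hi = proximal_sgd_random_scaling()
    for j, (e, l, h) in enumerate(zip(est, lo, hi)):
        print(f"theta[{j}]: {e:+.3f}  95% CI [{l:+.3f}, {h:+.3f}]")
```

In this sketch the estimate, its random-scaling matrix, and the resulting intervals are all maintained from running sums in a single pass over the data stream, which is the sense in which the inference is fully online; no smoothness, mixing, or variance parameters need to be plugged in.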