Workshop on Advanced Theories and Methods in Data Science

2026.03.13

Conveners: Jianjun Wang (Professor, School of Mathematics, Southwest University), Zhiguo Wang (Associate Professor, School of Mathematics, Sichuan University), Xile Zhao (Professor, School of Mathematical Sciences, University of Electronic Science and Technology of China)

Dates: April 12–18, 2026


Workshop on Advanced Theories and Methods in Data Science

General Schedule


Monday, April 13, 2026

Time | Title | Speaker | Affiliation | Chair
07:30-09:00 | Breakfast | | |
09:00-09:20 | Opening Ceremony and Group Photo | | |
09:20-10:00 | Approximation and Generalization of Deep Neural Networks | Feilong Cao | Zhejiang Normal University | Jianjun Wang
10:00-10:40 | Positive Definiteness of Matrix Hadamard Products and Its Applications in Radar Signal Processing | Zai Yang | Xi’an Jiaotong University |
10:40-10:50 | Coffee Break | | |
10:50-11:30 | Personalized Federated Learning for Data Heterogeneity: ADMM-based Model Decoupling and Partial Personalization | Jinshan Zeng | Xi’an Jiaotong University | Haiyang Li
11:30-12:10 | Exactly or Approximately Wasserstein Distributionally Robust Estimation According to Wasserstein Radii Being Small or Large | Enbin Song | Sichuan University |
12:10-13:00 | Lunch | | |
14:30-15:10 | Tensor Orthogonal Subspace Split: Theory and Applications | Jifei Miao | Yunnan University | Chao Wang
15:10-15:50 | A General Embedded Modelling Framework for Data Fusion | Liangjian Deng | University of Electronic Science and Technology of China |
15:50-16:20 | Coffee Break | | |
16:20-17:00 | Tensor Multi-view Clustering: Model Construction and Algorithm Design | Zhi Wang | Southwest University | Liangjian Deng
17:00-17:40 | Low-Rank Tensor Recovery from Quantized Measurements | Jingyao Hou | China West Normal University |
17:40-19:00 | Dinner | | |

(In all tables, a chair is listed once per session and chairs both talks of that session.)


Tuesday, April 14, 2026

Time | Title | Speaker | Affiliation | Chair
07:30-09:00 | Breakfast | | |
09:00-09:40 | Fourier Phase Retrieval via Self-learned Regularization | Hongxia Wang | National University of Defense Technology | Qiyu Jin
09:40-10:20 | How many integrals should be evaluated at least in two-dimensional hyperinterpolation? | Maolin Che | Guizhou University |
10:20-10:50 | Coffee Break | | |
10:50-11:30 | From Explicit Regularization to Implicit Priors: Image Reconstruction Algorithms Based on Nonlocality and Low-Rankness | Qiyu Jin | Lanzhou University | Hongxia Wang
11:30-12:10 | Beyond the Grid: Bridging Classical Approximation Theory and Implicit Neural Representations | Chao Wang | Southern University of Science and Technology |
12:10-13:00 | Lunch | | |
14:30-17:30 | Thematic Discussion | | |
17:30-19:00 | Dinner | | |



Wednesday, April 15, 2026

Time | Title | Speaker | Affiliation | Chair
07:30-09:00 | Breakfast | | |
09:00-09:40 | Two-Stage Data Synthesization: A Statistics-Driven Restricted Trade-off between Privacy and Prediction | Shaobo Lin | Xi’an Jiaotong University | Xile Zhao
10:00-10:40 | The Power of Small Initialization in Noisy Low-Tubal-Rank Tensor Recovery | Yao Wang | Xi’an Jiaotong University |
10:40-10:50 | Coffee Break | | |
10:50-11:30 | Adaptive Dimension Reduction for Overlapping Group Sparsity | Jingwei Liang | Shanghai Jiao Tong University | Jian Lu
11:30-12:10 | A Gradient Guided Diffusion Framework for Chance Constrained Programming | Zhiguo Wang | Sichuan University |
12:10-13:00 | Lunch | | |
14:30-15:10 | Structure-Self-Evolving Tensor Decomposition and Its Applications in High-Dimensional Data Processing | Yubang Zheng | Southwest Jiaotong University | Dunbiao Niu
15:10-15:50 | Deep Learning-Driven Image Restoration and Numerical Weather Prediction Data Assimilation | Ji Li | Capital Normal University |
15:50-16:20 | Coffee Break | | |
16:20-17:00 | Symmetry Priors and Deep Learning | Qi Xie | Xi’an Jiaotong University | Yuping Duan
17:00-17:40 | A Dual Inexact Nonsmooth Newton Method for Distributed Optimization | Dunbiao Niu | Tongji University |
17:40-19:00 | Dinner | | |

Thursday, April 16, 2026

Time | Title | Speaker | Affiliation | Chair
07:30-09:00 | Breakfast | | |
09:00-09:40 | Reconstruction of High-Dimensional Biological Systems under Incomplete Observations | Xiaofei Zhang | Central China Normal University | Zhiguo Wang
09:40-10:20 | Starter-Iterator Neural Operator for Solving Forward and Inverse Problems | Yuping Duan | Beijing Normal University |
10:20-10:50 | Coffee Break | | |
10:50-11:30 | Data-driven deformation correction in X-ray spectro-tomography with implicit neural networks | Ting Wang | Southern University of Science and Technology | Yubang Zheng
11:30-12:10 | Spectral Compressed Sensing via Low-Rank Structured Matrix Optimization | Xunmeng Wu | The Hong Kong University of Science and Technology |
12:10-13:00 | Lunch | | |
14:30-17:30 | Thematic Discussion | | |
17:30-19:00 | Dinner | | |

Friday, April 17, 2026

Time | Title | Speaker | Affiliation | Chair
07:30-09:00 | Breakfast | | |
09:00-12:00 | Thematic Discussion | | |
12:10-13:00 | Lunch | | |
14:30-17:30 | Departure | | |

Talk Titles and Abstracts (in Schedule Order)

Monday

Morning

1. Feilong Cao (Zhejiang Normal University)

Title: Approximation and Generalization of Deep Neural Networks

Abstract: This talk first discusses the fundamental issues of deep learning, namely approximation capability, algorithmic effectiveness, and generalization. It then introduces constructive approaches to deep network approximation and explains their significance. Finally, it presents several new results on the generalization of deep networks in connection with algorithms such as Dropout and SAM.

2. Zai Yang (Xi’an Jiaotong University)

Title: Positive Definiteness of Matrix Hadamard Products and Its Applications in Radar Signal Processing

Abstract: According to the classical Schur product theorem, the Hadamard (entrywise) product of two positive (semi-)definite matrices is also positive (semi-)definite. But can the Hadamard product of two singular positive semidefinite matrices be positive definite? And what is its role in radar signal processing? This talk will address these questions and briefly discuss how mathematics advances engineering research and how engineering, in turn, inspires mathematical research.
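
As a concrete illustration of the first question (a standard example, not necessarily the one used in the talk), the Hadamard product of two singular positive semidefinite matrices can indeed be positive definite:

```python
import numpy as np

# Two rank-2 (hence singular) positive semidefinite 3x3 matrices.
A = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [0., 0., 1.]])
B = np.array([[1., 0., 0.],
              [0., 1., 1.],
              [0., 1., 1.]])

print("eig(A):", np.round(np.linalg.eigvalsh(A), 6))  # contains 0 -> singular
print("eig(B):", np.round(np.linalg.eigvalsh(B), 6))  # contains 0 -> singular

H = A * B  # Hadamard (entrywise) product
print("A∘B == I:", np.allclose(H, np.eye(3)))  # the identity: positive definite
```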

3. Jinshan Zeng (Xi’an Jiaotong University)

Title: Personalized Federated Learning for Data Heterogeneity: ADMM-based Model Decoupling and Partial Personalization

Abstract: Personalized Federated Learning (PFL) often faces a performance trade-off between the global model and local personalized models when dealing with data heterogeneity. To tackle this challenge, this talk delves into two typical forms of data heterogeneity: statistical heterogeneity and modality heterogeneity. Accordingly, we propose two novel algorithmic frameworks based on the Alternating Direction Method of Multipliers (ADMM), incorporating tailored model decoupling and partial personalization strategies, respectively, and establish their convergence theories. For statistical heterogeneity, we first leverage the Moreau Envelope to achieve full-model personalization, and then introduce proxy models of the global model on local clients to decouple the global and personalized models, thereby harmonizing global consensus with local specificity. For modality heterogeneity, we design a partial model personalization algorithm that decomposes the model into globally shared and locally personalized layers, thus effectively mitigating the challenges posed by modality inconsistency. Experimental results validate that, compared to conventional federated learning and mainstream personalization methods, the two proposed frameworks achieve a superior global-local performance trade-off in their respective scenarios, yielding marked improvements in the models' performance and robustness.
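
For orientation, a standard way to write the Moreau-envelope-based full-model personalization objective (our notation; the talk's exact formulation may differ) is

```latex
\min_{w}\ \sum_{i=1}^{N} F_i(w),
\qquad
F_i(w) \;=\; \min_{\theta_i}\ \Big\{\, f_i(\theta_i) \;+\; \tfrac{\lambda}{2}\,\lVert \theta_i - w \rVert^2 \,\Big\},
```

where f_i is client i's local loss, \theta_i its personalized model, w the global model, and \lambda trades consensus against personalization; the ADMM-based decoupling introduces local proxies of w to split these coupled minimizations.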

4. Enbin Song (Sichuan University)

Title: Exactly or Approximately Wasserstein Distributionally Robust Estimation According to Wasserstein Radii Being Small or Large

Abstract: This talk primarily considers the robust estimation problem under Wasserstein distance constraints on the parameter and noise distributions in the linear measurement model with additive noise, which can be formulated as an infinite-dimensional nonconvex minimax problem. We prove that the existence of a saddle point for this problem is equivalent to that for a finite-dimensional minimax problem, and give a counterexample demonstrating that the saddle point may not exist. Motivated by this observation, we present a verifiable necessary and sufficient condition whose parameters can be derived from a convex problem and its dual. Additionally, we introduce a simplified sufficient condition, which intuitively indicates that when the Wasserstein radii are small enough, the saddle point always exists. In the absence of a saddle point, we solve a finite-dimensional nonconvex minimax problem, obtained by restricting the estimator to be linear. Its optimal value establishes an upper bound on the robust estimation problem, while its optimal solution yields a robust linear estimator. Numerical experiments are also provided to validate our theoretical results.
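
Schematically, such a Wasserstein-constrained robust estimation problem has the minimax form (our notation; the radii, ambiguity sets, and objective follow the talk's setup only loosely):

```latex
\min_{\psi}\ \max_{\substack{P_x \in \mathcal{B}_W(\hat{P}_x,\, \rho_x) \\ P_v \in \mathcal{B}_W(\hat{P}_v,\, \rho_v)}}
\ \mathbb{E}\big[\, \lVert x - \psi(y) \rVert^2 \,\big],
\qquad y = Hx + v,
```

where \mathcal{B}_W(\hat{P}, \rho) denotes the Wasserstein ball of radius \rho around a nominal distribution \hat{P} and H is the known measurement matrix; restricting \psi to linear maps yields the finite-dimensional surrogate mentioned in the abstract.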

 

Afternoon

5. Jifei Miao (Yunnan University)

Title: Tensor Orthogonal Subspace Split: Theory and Applications

Abstract: Tensor representations have emerged as a fundamental paradigm for modeling multidimensional data by preserving intrinsic correlations across multiple modes. This work proposes a novel theoretical framework, termed Tensor Orthogonal Subspace Split (TOSS), which generalizes classical orthogonal projections to the tensor domain through a consistent fiber-wise subspace split along a prescribed mode. We first present the general formulation of TOSS and systematically investigate its fundamental properties. As an important and practically meaningful special case, we further introduce the rank-one TOSS, which imposes a separable rank-one structure along the splitting mode and admits a clear geometric interpretation. This formulation naturally captures dominant coherent patterns while effectively isolating orthogonal residual variations. The proposed framework establishes a unified theoretical foundation for tensor-domain orthogonal split and opens new avenues for structured tensor modeling across diverse applications. Building upon the developed TOSS theory, we present several illustrative examples to demonstrate its generality. We then focus on hyperspectral image restoration and color video background separation as two representative tasks, for which corresponding optimization models are formulated. Efficient ADMM-based algorithms are developed to solve the resulting problems. Extensive experimental results validate the effectiveness and superiority of the proposed approaches.
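
To fix ideas, here is a minimal sketch of a fiber-wise orthogonal split along one mode (our illustrative reading of the construction, with a generic orthonormal basis U; taking U with a single column gives a rank-one split along that mode):

```python
import numpy as np

def mode_unfold(X, k):
    """Mode-k unfolding: each column of the result is a mode-k fiber of X."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def mode_fold(M, k, shape):
    """Inverse of mode_unfold."""
    full = [shape[k]] + [s for i, s in enumerate(shape) if i != k]
    return np.moveaxis(M.reshape(full), 0, k)

def orthogonal_split(X, U, k):
    """Split X = X_par + X_perp fiber-wise along mode k, where U has
    orthonormal columns spanning a subspace of the mode-k fiber space."""
    Xk = mode_unfold(X, k)
    Xpar = mode_fold(U @ (U.T @ Xk), k, X.shape)  # projection onto span(U)
    return Xpar, X - Xpar                          # orthogonal residual

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
U, _ = np.linalg.qr(rng.standard_normal((4, 1)))   # rank-one split along mode 0
Xpar, Xperp = orthogonal_split(X, U, 0)
print(np.allclose(X, Xpar + Xperp))                       # exact split
print(abs(np.tensordot(Xpar, Xperp, 3)) < 1e-8)           # orthogonality
```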

6. Liangjian Deng (University of Electronic Science and Technology of China)

Title: A General Embedded Modelling Framework for Data Fusion

Abstract: This talk primarily explores how to embed deep learning priors and traditional variational optimization models into a general modelling framework, which can effectively enhance the accuracy, generalizability, and interpretability of current intelligent methods. The talk mainly covers two aspects: 1) introducing the general embedded modelling framework, which bridges traditional variational optimization models and deep learning models; and 2) giving some examples of the framework successfully applied to representative data fusion tasks, while analyzing the relationship between these techniques and current mainstream deep learning approaches.

7. Zhi Wang (Southwest University)

Title: Tensor Multi-view Clustering: Model Construction and Algorithm Design

Abstract: By integrating multi-domain features, multi-view subspace clustering has become a powerful tool for high-dimensional data analysis. However, existing methods still face major challenges in effectively exploiting high-order complementary information across views and in accurately characterizing the intrinsic geometric structure of complex data, such as ambiguous points in overlapping subspaces. This talk will present our recent work. First, we introduce a unified clustering framework based on the truncated tensor nuclear norm. The model employs a nonconvex truncated tensor nuclear norm to capture global low-rank structure more accurately and improve robustness to outliers. More importantly, this work is the first to incorporate hyper-Laplacian regularization and spectral embedding into a joint optimization framework, breaking the limitations of traditional multi-stage processing and enabling mutual reinforcement between local geometric manifolds and clustering objectives. Second, we present a structure-enhanced tensor multi-view clustering method. To better distinguish boundary data points, this work proposes a nonconvex tensor Schatten-2/3 quasi-norm, which adaptively weights and encodes high-order complementary information more effectively. Combined with a self-enhancing structural penalty based on affinity propagation, it enables refined separation of complex local geometric manifolds and block-diagonal representation. On the optimization and theoretical side, both works develop efficient solvers for challenging nonconvex problems, with closed-form solutions for all subproblems, and provide rigorous mathematical proofs guaranteeing convergence to KKT stationary points. Extensive experiments show that, by jointly leveraging nonconvex tensor low-rank approximation and fine-grained geometric structural constraints, the proposed models achieve performance surpassing existing state-of-the-art methods across a variety of benchmark datasets.

8. Jingyao Hou (China West Normal University)

Title: Low-Rank Tensor Recovery from Quantized Measurements

Abstract: This study addresses the problem of tensor recovery from quantized measurements, aiming to reconstruct low-rank tensors based on quantized linear inner-product measurements. This problem has applications in compressed representation and efficient transmission of high-order tensors. Existing methods rely on computing the product between the vectorized tensor and a large random Gaussian measurement matrix, which leads to storage and transmission challenges. To overcome this, we introduce a multi-stage modewise measurement strategy, using smaller measurement matrices to mitigate these issues. We propose an iterative projected back-projection recovery algorithm within the higher-order singular value decomposition (HOSVD) framework to match the quantized multi-stage measurements. By leveraging the tensor limited projection distortion (LPD) property and the restricted isometry property (RIP), we establish conditions on both the sampling operator and quantizer for low-rank tensor reconstruction. Experiments on synthetic and real-world data demonstrate the effectiveness of our theory and the superiority of our algorithm.
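
A schematic of the projected back-projection idea, in a deliberately simplified form (single dense Gaussian operator and HOSVD truncation; the talk's multi-stage modewise measurements and analysis are more refined):

```python
import numpy as np

def unfold(X, k):
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

def hosvd_truncate(X, ranks):
    """Project X (approximately) onto tensors of multilinear rank <= ranks."""
    Y = X
    for k, r in enumerate(ranks):
        U = np.linalg.svd(unfold(Y, k), full_matrices=False)[0][:, :r]
        Y = np.moveaxis(np.tensordot(U @ U.T, Y, axes=(1, k)), 0, k)
    return Y

rng = np.random.default_rng(1)
shape, ranks = (8, 8, 8), (2, 2, 2)
X_true = rng.standard_normal(ranks)              # low-multilinear-rank ground truth
for k in range(3):
    Uk = np.linalg.qr(rng.standard_normal((shape[k], ranks[k])))[0]
    X_true = np.moveaxis(np.tensordot(Uk, X_true, axes=(1, k)), 0, k)

m = 4 * np.prod(shape)                            # oversampled linear measurements
A = rng.standard_normal((m, np.prod(shape))) / np.sqrt(m)
delta = 0.01
quantize = lambda z: delta * (np.floor(z / delta) + 0.5)   # uniform quantizer
q = quantize(A @ X_true.ravel())

X = np.zeros(shape)                               # projected back-projection iterations
for _ in range(30):
    back = X.ravel() + A.T @ (q - A @ X.ravel())  # back-project the residual
    X = hosvd_truncate(back.reshape(shape), ranks)
print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```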

 

Tuesday

Morning

9. Hongxia Wang (National University of Defense Technology)

Title: Fourier Phase Retrieval via Self-learned Regularization

Abstract: Fourier phase retrieval (FPR) is an ill-posed problem widely applied in various imaging scenarios. Existing methods based on pre-trained priors can alleviate the ill-posedness of the FPR problem; however, they rely heavily on pre-training datasets and may introduce spurious artifacts during the inversion process. In this talk, we introduce a series of FPR methods based on self-supervised regularization. By integrating the implicit regularization of untrained neural networks with advanced optimization algorithms, these proposed methods achieve high-quality signal reconstruction from limited measurement data, without the need for any external data or pre-training procedures. Moreover, the convergence of the proposed algorithms is theoretically analyzed.
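
The untrained-network regularization can be summarized by a deep-image-prior-style objective (generic form, our notation):

```latex
\min_{\theta}\ \big\lVert\, \lvert \mathcal{F}(G_\theta(z)) \rvert \;-\; b \,\big\rVert_2^2,
```

where G_\theta is an untrained generator with weights \theta, z a fixed random input, \mathcal{F} the Fourier transform, and b the measured Fourier magnitudes; the network architecture itself supplies the implicit prior, so no external data or pre-training is needed.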

10. Maolin Che (Guizhou University)

Title: How many integrals should be evaluated at least in two-dimensional hyperinterpolation?

Abstract: This paper introduces a novel approach to approximating continuous functions over high-dimensional hypercubes by integrating matrix CUR decomposition with hyperinterpolation techniques. Traditional Fourier-based hyperinterpolation methods suffer from the curse of dimensionality, as the number of coefficients grows exponentially with the dimension. To address this challenge, we propose two efficient strategies for constructing low-rank matrix CUR decompositions of the coefficient matrix, significantly reducing computational complexity while preserving accuracy. The first method employs structured index selection to form a compressed representation of the matrix, while the second utilizes adaptive sampling to further optimize storage and computation. Theoretical error bounds are derived for both approaches, ensuring rigorous control over approximation quality. Additionally, practical algorithms---including randomized and adaptive decomposition techniques---are developed to efficiently compute the CUR decomposition. Numerical experiments demonstrate the effectiveness of our methods in drastically reducing the number of required coefficients without compromising precision. Our results bridge matrix/tensor decomposition and function approximation, offering a scalable solution for high-dimensional problems. This work advances the field of numerical analysis by providing a computationally efficient framework for hyperinterpolation, with potential applications in scientific computing, machine learning, and data-driven modeling.
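
To make the matrix CUR idea concrete, a minimal generic sketch (uniformly sampled rows and columns; the talk's structured and adaptive selection rules are more refined). For a rank-r matrix, sampling enough rows and columns so that the core intersection also has rank r recovers the matrix exactly:

```python
import numpy as np

def cur(A, I, J):
    """CUR approximation A ~ C @ U @ R from row indices I and column indices J."""
    C = A[:, J]                           # sampled columns
    R = A[I, :]                           # sampled rows
    U = np.linalg.pinv(A[np.ix_(I, J)])   # pseudoinverse of the core intersection
    return C @ U @ R

rng = np.random.default_rng(0)
# numerically low-rank "coefficient matrix" (rank 10)
F = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 200))
I = rng.choice(200, size=20, replace=False)
J = rng.choice(200, size=20, replace=False)
err = np.linalg.norm(F - cur(F, I, J)) / np.linalg.norm(F)
print("relative CUR error:", err)  # tiny: only 20 rows + 20 columns evaluated
```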

11. Qiyu Jin (Lanzhou University)

Title: From Explicit Regularization to Implicit Priors: Image Reconstruction Algorithms Based on Nonlocality and Low-Rankness

Abstract: Images are vital information carriers in modern society, yet they often suffer from blur, noise, distortion, and missing data due to limitations in acquisition and transmission, degrading quality and affecting downstream applications. This work investigates image reconstruction from the perspectives of nonlocality and low-rankness. First, a normalized weighted nonlocal Laplacian is proposed for sparse aerial height reconstruction, with adaptive weights derived automatically. A curvature-guided weighted nonlocal total variation model is further developed, theoretically proven to converge, and shown to improve PSNR and robustness. Second, two quaternion-based nonconvex low-rank mixed norms are introduced, with closed-form solutions and ADMM-based algorithms. They are applied to inverse problems, matrix completion, and RPCA, achieving superior accuracy and efficiency. Finally, nonlocality and low-rankness are unified through an implicit low-rank quaternion deep matrix factorization model with graph regularization, solved via self-supervised learning. Experiments demonstrate clear advantages over explicit low-rank methods.

12. Chao Wang (Southern University of Science and Technology)

Title: Beyond the Grid: Bridging Classical Approximation Theory and Implicit Neural Representations

Abstract: Implicit Neural Representations (INRs) have revolutionized how we represent signals, shifting from discrete pixel grids to continuous function approximators. However, the "black-box" nature of Multi-Layer Perceptrons (MLPs) often leads to spectral bias and inefficient parameter scaling. In this talk, we trace the transition of INRs from simple coordinate-to-value mappings to mathematically grounded frameworks. We examine how classical tools, such as Fourier-Chebyshev features and Tensor Ring decompositions, can be integrated into the INR pipeline to mitigate spectral bias and provide structural priors. By viewing INRs through the lens of white-box encoding and multilinear algebra, we demonstrate how these models outperform traditional methods in inverse problems and complex scientific imaging, such as TXM-XANES 3D chemical mapping and hyperspectral image fusion.
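
As a toy illustration of feature encodings used against spectral bias (plain random Fourier features with a linear fit; the Fourier-Chebyshev and Tensor Ring constructions in the talk are more structured):

```python
import numpy as np

def fourier_features(x, B):
    """Lift coordinates x to [cos(2*pi*x@B.T), sin(2*pi*x@B.T)]."""
    proj = 2 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
x = rng.uniform(size=(512, 1))                   # 1-D coordinates
y = np.sign(np.sin(16 * np.pi * x[:, 0]))        # high-frequency target signal
B = rng.normal(scale=10.0, size=(64, 1))         # frequencies (scale = bandwidth prior)

Phi = fourier_features(x, B)                     # lifted features
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # linear "last layer" fit
raw = np.hstack([x, np.ones_like(x)])            # raw-coordinate baseline
v, *_ = np.linalg.lstsq(raw, y, rcond=None)
print("RMSE, Fourier features:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
print("RMSE, raw coordinates: ", np.sqrt(np.mean((raw @ v - y) ** 2)))
```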

Wednesday

Morning

13. Shaobo Lin (Xi’an Jiaotong University)

Title: Two-Stage Data Synthesization: A Statistics-Driven Restricted Trade-off between Privacy and Prediction

Abstract: We propose a two-stage synthesis strategy. In the first stage, we introduce a synthesis-then-hybrid strategy, which involves a synthesis operation to generate pure synthetic data, followed by a hybrid operation that fuses the synthetic data with the original data. In the second stage, we present a kernel ridge regression (KRR)-based synthesis strategy, where a KRR model is first trained on the original data and then used to generate synthetic outputs based on the synthetic inputs produced in the first stage. By leveraging the theoretical strengths of KRR and the covariate distribution retention achieved in the first stage, our proposed two-stage synthesis strategy enables a statistics-driven restricted privacy–prediction trade-off and guarantees optimal prediction performance. We validate our approach and demonstrate its characteristics of being statistics-driven and restricted in achieving the privacy–prediction trade-off both theoretically and numerically. Additionally, we showcase its generalizability through applications to a marketing problem and five real-world datasets.
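
A minimal sketch of the second-stage idea (KRR trained on original data, then queried at synthetic inputs; the first-stage synthesis is replaced by a placeholder sampler, and all names and parameters are illustrative):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 2))                  # original (private) inputs
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(300)

# Stage 1 (placeholder): synthetic inputs retaining the covariate distribution.
X_syn = rng.uniform(-1, 1, size=(300, 2))

# Stage 2: fit KRR on the original data, then label the synthetic inputs.
krr = KernelRidge(kernel="rbf", alpha=1e-2, gamma=2.0).fit(X, y)
y_syn = krr.predict(X_syn)                             # synthetic outputs

# A downstream learner only ever sees the synthetic pairs (X_syn, y_syn).
downstream = KernelRidge(kernel="rbf", alpha=1e-2, gamma=2.0).fit(X_syn, y_syn)
X_test = rng.uniform(-1, 1, size=(1000, 2))
y_test = np.sin(3 * X_test[:, 0]) + 0.5 * X_test[:, 1]
print("test RMSE:", np.sqrt(np.mean((downstream.predict(X_test) - y_test) ** 2)))
```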

14. Yao Wang (Xi’an Jiaotong University)

Title: The Power of Small Initialization in Noisy Low-Tubal-Rank Tensor Recovery

Abstract: We study the problem of recovering a low-tubal-rank tensor $\mathcal{X}_\star$ from noisy linear measurements under the t-product framework. A widely adopted strategy involves factorizing the optimization variable $\mathcal{X} \in \mathbb{R}^{n \times n \times k}$ as $\mathcal{U} * \mathcal{U}^\top$, where $\mathcal{U} \in \mathbb{R}^{n \times R \times k}$, followed by applying factorized gradient descent (FGD) to solve the resulting optimization problem. Since the tubal-rank $r$ of the underlying tensor $\mathcal{X}_\star$ is typically unknown, this method often assumes $r \le R \le n$, a regime known as over-parameterization. However, when the measurements are corrupted by some dense noise (e.g., sub-Gaussian noise), FGD with the commonly used spectral initialization yields a recovery error that grows linearly with the over-estimated tubal-rank $R$. To address this issue, we show that using a small initialization enables FGD to achieve a much smaller recovery error, even when the tubal-rank $R$ is significantly overestimated. Using a four-stage analytic framework, we analyze this phenomenon and establish the sharpest known error bound to date, which is independent of the overestimated tubal-rank $R$. Furthermore, we provide a theoretical guarantee showing that an easy-to-use early stopping strategy can achieve this best-known result in practice. All these theoretical findings are validated through a series of simulations and real-data experiments.
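
A toy matrix analogue of the small-initialization effect (PSD matrix sensing with an overestimated factor rank R; the talk's setting is tensorial under the t-product, which this sketch does not implement, and exact numbers will vary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, R, m = 30, 2, 10, 600              # true rank r, overestimated rank R
U_star = rng.standard_normal((n, r)) / np.sqrt(n)
X_star = U_star @ U_star.T

A = rng.standard_normal((m, n, n)) / np.sqrt(m)
A = (A + np.swapaxes(A, 1, 2)) / 2        # symmetric Gaussian sensing matrices
y = np.einsum('kij,ij->k', A, X_star) + 0.01 * rng.standard_normal(m)

def fgd(U, steps=800, eta=0.02):
    """Factorized gradient descent on ||A(U U^T) - y||_2^2."""
    for _ in range(steps):
        resid = np.einsum('kij,ij->k', A, U @ U.T) - y
        U = U - eta * 4 * np.einsum('k,kij->ij', resid, A) @ U
    return U

# Compare recovery errors under a large versus a tiny initialization scale.
for scale, label in [(1.0, "large init"), (1e-6, "small init")]:
    U = fgd(scale * rng.standard_normal((n, R)) / np.sqrt(n))
    err = np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star)
    print(f"{label}: relative error {err:.4f}")
```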

15. Jingwei Liang (Shanghai Jiao Tong University)

Title: Adaptive Dimension Reduction for Overlapping Group Sparsity

Abstract: Typical dimension reduction techniques for sparse optimization involve screening strategies based on a dual certificate derived from the first-order optimality condition, approximating the gradients, or exploiting some inherent low-dimensional structure that an optimization algorithm promotes. Screening rules for the overlapping group lasso are generally less developed because the subgradient structure is more complex and the link between the sparsity pattern and the dual vector is generally indirect. In this talk, I will present a new strategy for certifying the support of the overlapping group lasso and demonstrate how it can be applied to significantly accelerate numerical methods.
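
For reference, the overlapping group lasso problem discussed here can be written in standard notation as

```latex
\min_{\beta \in \mathbb{R}^{p}}\ \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2
\;+\; \lambda \sum_{g \in \mathcal{G}} w_g\, \lVert \beta_g \rVert_2,
```

where the groups g in \mathcal{G} may share coordinates; screening seeks to certify, before or during optimization, which groups are inactive (\beta_g = 0) at the solution so they can be removed from the problem.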

16. Zhiguo Wang (Sichuan University)

Title: A Gradient Guided Diffusion Framework for Chance Constrained Programming

Abstract: Chance constrained programming (CCP) is a powerful framework for addressing optimization problems under uncertainty. In this paper, we introduce a novel Gradient-Guided Diffusion-based Optimization framework, termed GGDOpt, which tackles CCP through three key innovations. First, GGDOpt accommodates a broad class of CCP problems without requiring the knowledge of the exact distribution of uncertainty—relying solely on a set of samples. Second, to address the nonconvexity of the chance constraints, it reformulates the CCP as a sampling problem over the product of two distributions: an unknown data distribution supported on a nonconvex set and a Boltzmann distribution defined by the objective function, which fully leverages both first- and second-order gradient information. Third, GGDOpt has theoretical convergence guarantees and provides practical error bounds under mild assumptions. By progressively injecting noise during the forward diffusion process to convexify the nonconvex feasible region, GGDOpt enables guided reverse sampling to generate asymptotically optimal solutions. Experimental results on synthetic datasets and a waveform design task in wireless communications demonstrate that GGDOpt outperforms existing methods in both solution quality and stability with nearly 80% overhead reduction.
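
The underlying chance constrained program has the generic form (standard notation):

```latex
\min_{x \in \mathcal{X}}\ f(x)
\quad \text{s.t.} \quad
\mathbb{P}_{\xi}\big( g(x,\xi) \le 0 \big) \;\ge\; 1 - \alpha,
```

where \xi is the random parameter, known here only through samples, and \alpha in (0,1) is the allowed constraint-violation probability; the nonconvexity of the feasible set defined by the probabilistic constraint is what the diffusion-based reformulation targets.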

Afternoon

17. Yubang Zheng (Southwest Jiaotong University)

Title: Structure-Self-Evolving Tensor Decomposition and Its Applications in High-Dimensional Data Processing

Abstract: Tensor decomposition aims to represent large-scale high-order tensors as a collection of interrelated low-dimensional factor vectors, matrices, or tensors, and has attracted wide attention in recent years in scientific computing, machine learning, and computer vision. This talk first reviews the basic operations and representation frameworks of tensor decomposition, and then systematically introduces typical decomposition forms with fixed structures, including CP decomposition, Tucker decomposition, tensor singular value decomposition, Tensor Train decomposition, Tensor Ring decomposition, and Fully-Connected Tensor Network decomposition. The talk then focuses on how to design structure-self-evolving tensor decomposition methods that can adaptively determine the decomposition topology and rank parameters according to the intrinsic structure of the data. Building on this, and in conjunction with high-dimensional data processing tasks such as cloud removal in multitemporal remote sensing images, traffic data imputation, and deep network compression, we analyze the representation power, computational efficiency, and reconstruction accuracy of different tensor decomposition models, providing useful references for efficient modeling and intelligent processing of high-dimensional data.
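
As one concrete fixed-structure example from the list above, here is a minimal TT-SVD sketch (the standard Tensor Train decomposition via sequential truncated SVDs, simplified; not the talk's code):

```python
import numpy as np

def tt_svd(X, max_rank):
    """Decompose X into Tensor Train cores G_k of shape (r_{k-1}, n_k, r_k)."""
    dims, cores, r = X.shape, [], 1
    M = X.reshape(dims[0], -1)
    for k, n in enumerate(dims[:-1]):
        M = M.reshape(r * n, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = min(max_rank, int(np.sum(s > 1e-10)))
        cores.append(U[:, :rk].reshape(r, n, rk))
        M = s[:rk, None] * Vt[:rk]        # carry the remainder forward
        r = rk
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=(-1, 0))
    return T.squeeze(axis=(0, -1))

rng = np.random.default_rng(0)
# build a tensor with exact TT-ranks (3, 3) and check exact recovery
X = tt_reconstruct([rng.standard_normal((1, 6, 3)),
                    rng.standard_normal((3, 7, 3)),
                    rng.standard_normal((3, 8, 1))])
cores = tt_svd(X, max_rank=3)
print("relative error:", np.linalg.norm(tt_reconstruct(cores) - X) / np.linalg.norm(X))
```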

18. Ji Li (Capital Normal University)

Title: Deep Learning-Driven Image Restoration and Numerical Weather Prediction Data Assimilation

Abstract: Image restoration and four-dimensional variational data assimilation (4D-Var) in numerical weather prediction are both key problems in scientific computing. Although both are solved by constructing a variational optimization objective and minimizing it, the physical mechanisms and mathematical purposes of their regularization terms differ substantially. For image restoration, the data-measurement constraint is usually underdetermined, and the regularization term is introduced to compensate for insufficient measurements and overcome ill-posedness. Since image priors are crucial to reconstruction quality, the use of deep generative models as priors has become a major research direction in recent years. In contrast, the data term in 4D-Var is typically well-posed, and its core objective is to use observations to correct the initial state of the model so as to suppress the accumulation of forecast errors. In operational systems, incremental 4D-Var ultimately reduces to solving a large-scale symmetric positive definite linear system. Given the computational bottleneck caused by dimensions that can reach tens of millions, accelerating this computation with deep learning has significant practical value. This talk will report the speaker’s latest research results on applying deep learning methods to these two frontier directions.
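
For orientation, the (strong-constraint) 4D-Var cost function takes the standard form

```latex
J(x_0) \;=\; \tfrac{1}{2}\,(x_0 - x_b)^{\top} B^{-1} (x_0 - x_b)
\;+\; \tfrac{1}{2} \sum_{t=0}^{T}
\big( H_t(\mathcal{M}_t(x_0)) - y_t \big)^{\top} R_t^{-1}
\big( H_t(\mathcal{M}_t(x_0)) - y_t \big),
```

where x_b is the background state, B and R_t are background and observation error covariances, \mathcal{M}_t the forecast model, H_t the observation operators, and y_t the observations; the incremental formulation linearizes \mathcal{M}_t and H_t, reducing each outer iteration to a large symmetric positive definite linear solve, which is the step the talk proposes to accelerate with deep learning.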

19. Qi Xie (Xi’an Jiaotong University)

Title: Symmetry Priors and Deep Learning

Abstract: Using image processing as an example, this talk explores the importance of geometric symmetry priors in the design of deep networks. It focuses on the construction methods and fundamental theory of new network building blocks, including high-precision rotation/scale-equivariant convolutions, rotation-equivariant implicit neural representations, rotation-equivariant Vision Transformers (ViTs), and transform-learnable equivariant convolutions. Furthermore, through applications such as medical and natural image processing, image reconstruction, and multi-frame image matching, the talk will show that embedding geometric symmetry priors can significantly improve model performance and generalization ability.

20. Dunbiao Niu (Tongji University)

Title: A Dual Inexact Nonsmooth Newton Method for Distributed Optimization

Abstract: In this presentation, we introduce a novel dual inexact nonsmooth Newton (DINN) method for solving a distributed optimization problem, which aims to minimize a sum of cost functions located among agents by communicating only with their neighboring agents over a network. Our method is based on the Lagrange dual of an appropriately formulated primal problem created by introducing local variables for each agent and enforcing a consensus constraint among these variables. Due to the decomposed structure of the dual problem, the DINN method guarantees a superlinear (or even quadratic) convergence rate for both the primal and dual iteration sequences, achieving the same convergence rate as its centralized counterpart. Furthermore, by exploiting the special structure of the dual generalized Hessian, we design a distributed iterative method based on Nesterov's acceleration technique to approximate the dual Newton direction with suitable precision. Moreover, in contrast to existing second-order methods, the DINN method relaxes the requirement for the objective function to be twice continuously differentiable by using the linear Newton approximation of its gradient. This expands the potential applications of distributed Newton methods. Numerical experiments demonstrate that the DINN method outperforms the current state-of-the-art distributed optimization methods.
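
In standard form, the consensus-constrained primal problem reads (our notation):

```latex
\min_{x_1, \dots, x_N}\ \sum_{i=1}^{N} f_i(x_i)
\quad \text{s.t.} \quad
x_i = x_j \quad \text{for all edges } (i,j) \in \mathcal{E},
```

where agent i holds the local cost f_i and the edge set \mathcal{E} encodes the communication network; the Lagrange dual of this problem separates across agents, which is the decomposed structure that DINN's (generalized) Newton iterations exploit.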

Thursday

Morning

21. Xiaofei Zhang (Central China Normal University)

Title: Reconstruction of High-Dimensional Biological Systems under Incomplete Observations

Abstract: Advances in single-cell omics and spatial omics have created new opportunities for understanding high-dimensional biological systems. However, missing measurements, mixed observations, and noise interference pose major challenges to the accurate recovery of cellular states and spatial structures. This talk focuses on the reconstruction of high-dimensional biological systems under incomplete observations and presents our recent progress in missing-expression recovery, spatial signal completion, cross-omics inference, and the analysis of mixed observations. Particular attention will be given to structured modeling, the integration of spatial constraints, and recoverability boundary analysis, along with applications of these methods to tumor microenvironment analysis.

22. Yuping Duan (Beijing Normal University)

Title: Starter-Iterator Neural Operator for Solving Forward and Inverse Problems

Abstract: Operator learning is an interdisciplinary field that combines machine learning with scientific computing to construct efficient mappings between input and output function spaces, thereby reducing the computational complexity of solving high-dimensional partial differential equations. We introduce a novel neural operator framework, the Starter-Iterator Neural Operator (SINO), which reimagines the initialization strategies and iteration formats of traditional methods using neural networks. This leads to a new paradigm for spectral-spatiotemporal collaborative modeling: frequency domain learning captures globally stable low-frequency features, while time domain learning refines local solution residuals, overcoming the limitations of traditional single-domain approaches. Extensive experiments on dynamic systems, including the Navier-Stokes and acoustic wave equations, demonstrate that SINO delivers exceptional numerical accuracy, generalization capability, and robustness.

 

23. Ting Wang (Southern University of Science and Technology)

Title: Data-driven deformation correction in X-ray spectro-tomography with implicit neural networks

Abstract: Full-field transmission X-ray microscopy with X-ray absorption near-edge structure spectroscopy enables non-destructive, high-resolution, chemically specific three-dimensional morphological and compositional analyses. However, spectro-tomographic acquisitions often suffer from image deformations and misalignments caused by mechanical instabilities and hardware limitations, which can substantially degrade the quality of tomographic reconstruction and downstream analyses. This critical bottleneck hinders the broader application of X-ray spectro-tomography in addressing complex scientific problems across various disciplines. To address this, we introduce CANet, a self-supervised coordinate-based neural network that implicitly models deformation fields to efficiently and accurately correct misalignment. Unlike traditional methods, CANet requires no external training data and learns a continuous mapping from projection spectral or angular coordinates to affine transformations, enabling unified registration across both tomographic and spectral dimensions. Demonstrated on X-ray spectro-tomographic datasets of battery cathode particles, CANet achieves robust alignment and restores high-fidelity structural and chemical contrast, thereby facilitating the capability to resolve nanoscale degradation mechanisms.

24. Xunmeng Wu (The Hong Kong University of Science and Technology)

Title: Spectral Compressed Sensing via Low-Rank Structured Matrix Optimization

Abstract: Spectral compressed sensing aims to recover spectrally sparse signals from limited measurements. Unlike traditional sparse signals, spectrally sparse signals are a superposition of a few complex exponentials, with their frequency parameters taking values in a continuous domain. A key idea is to transform spectral sparsity into the low-rank property of structured matrices, thereby converting the challenging spectral sparse recovery problem into a tractable low-rank matrix optimization problem. In this talk, we will introduce two classical models: the low-rank Toeplitz model and the low-rank Hankel model. We will further discuss the recently proposed low-rank Hankel–Toeplitz model, highlighting its advantages in capturing signal structures and enabling efficient computation, along with related optimization algorithms and theoretical recovery guarantees.