JaGuard: Position Error Correction of GNSS Jamming with Deep Temporal Graphs
Deep temporal graph method for correcting GNSS positioning errors from jamming attacks.
Graph neural network architecture for modeling spatio-temporal signals with dynamic structure.
Federated learning framework for training deep models on resource-constrained edge devices.
Method for identifying counterfactuals from observational data using optimal transport theory.
Novel sampling algorithm for masked diffusion models improving generation quality and efficiency.
Deep learning model for channel state information prediction in MIMO systems with robustness testing.
Optimizes on-device semantic selection with cross-encoder rerankers for retrieval, agent memory, and recommendations via a single monolithic forward pass.
Scalable framework for automated desktop UI exploration to generate training data for LLM-based GUI understanding and automation.
Parameter-free clustering framework using self-supervised consensus maximization without requiring hyperparameter tuning.
Pathlet dictionary learning approach for robust and interpretable trajectory generation in privacy-preserving urban mobility applications.
Studies model inversion attacks on latent diffusion models, showing non-uniform memorization patterns in latent space.
Theoretical analysis of feature learning mechanisms and implicit bias in deep networks, proposing analytical scaling arguments.
Presents Arc Gradient Descent optimizer with phase-aware step dynamics, evaluated on Rosenbrock function and ML datasets.
Proposes differentially private federated learning optimization using regularized Fisher information matrix for faster convergence under privacy constraints.
Uses Chernoff information to characterize trade-offs between fairness, privacy, and accuracy in machine learning systems.
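As background to the trade-off analysis above: for discrete distributions, the Chernoff information has a direct definition that can be evaluated numerically. A minimal sketch (the grid search over the exponent s is an illustrative choice, not the paper's method):

```python
import numpy as np

def chernoff_information(p, q, grid=1001):
    """Chernoff information between two discrete distributions:
    C(p, q) = max_{0 < s < 1} -log sum_i p_i^s q_i^(1-s)."""
    s = np.linspace(1e-6, 1 - 1e-6, grid)[:, None]       # (grid, 1)
    mixtures = np.sum(p[None, :] ** s * q[None, :] ** (1 - s), axis=1)
    return np.max(-np.log(mixtures))
```

Identical distributions give zero; more separable pairs give larger values, which is what makes it a natural currency for fairness/privacy/accuracy trade-offs.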
Theoretical analysis of input-connected MLPs with direct connections from input to hidden neurons and universal approximation properties.
Introduces homomorphism error metric to measure representational inconsistencies and predict compositional generalization failures in transformers.
Analyzes forecast uncertainty in ML model explainability, arguing uncertainty at decision boundaries explains LIME/SHAP instability.
Brain-inspired routing method with temporal-ensemble experts for general continual learning from non-stationary data streams.
Proposes one-to-one channel-head binding method for imputing missing values in multivariate time series data.
Studies robustness of PPO reinforcement learning under sensor drift using temporal sequence models to handle partial observability.
MJ1: multimodal judge trained with RL enforcing visual grounding through structured verification chains and counterfactual consistency rewards.
WiFi CSI sensing framework handling station-wise feature missingness and limited labeled data in multi-station deployments.
SAFE-PIT-CM: autoencoder with frozen Euler solver for recovering material diffusion coefficients from continuum mechanics data.
Key-deletion approach for machine unlearning designed at the model-development stage rather than post hoc, addressing privacy regulations and data errors.
PRISM: empirical study of mid-training design choices across 7 LLM base models showing consistent +15 to +40 point gains from 27B token sequences.
Systematic analysis of Elastic Weight Consolidation for continual learning, revealing suboptimal importance estimation and proposing improvements.
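For context, the quadratic penalty that Elastic Weight Consolidation adds (whose importance estimates the paper argues are suboptimal) is only a few lines; the dict-of-arrays parameter layout here is an illustrative assumption, not the paper's code:

```python
import numpy as np

def ewc_penalty(params, old_params, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta_i*)^2,
    where F_i is the (diagonal) Fisher importance of parameter i and
    theta_i* is its value after the previous task."""
    total = 0.0
    for name in params:
        diff = params[name] - old_params[name]
        total += np.sum(fisher[name] * diff ** 2)
    return 0.5 * lam * total
```

The quality of the whole scheme hinges on `fisher`, typically estimated from squared gradients of the log-likelihood on the old task, which is exactly the estimation step such analyses scrutinize.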
DBML SA: framework using dynamic Bayesian machine learning to evaluate operator situation awareness in nuclear control environments.
MemReward: graph-based experience memory framework reducing human labeling needs for LLM reward prediction in RL post-training.
Analysis of fixed-point iterations for nuclear norm optimization in private machine learning, with proofs developed in collaboration with Gemini 3.
FIPO: reinforcement learning algorithm improving reasoning in LLMs through fine-grained credit assignment beyond outcome-based rewards.
Memory-Keyed Attention (MKA): reduces KV cache memory costs in long-context LLM inference without sacrificing representation quality.
Algorithm for optimal multi-task dataset mixture selection in LLM supervised fine-tuning. Addresses heterogeneous learning dynamics and overfitting.
Uncertainty quantification methods for distribution-to-distribution generative models in scientific imaging. Ensures trustworthy cell and medical image generation.
Reinforcement learning post-training for virtual cell models to enforce biological constraints. Improves generative model reliability for drug discovery.
Gaussian Graphical Models with simultaneous clustering and graph inference for high-dimensional data. Dimensionality reduction approach.
Hyperdimensional binary encoding method for molecular structures in drug discovery. Replaces expensive biophysical calculations.
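The binary hyperdimensional primitives such an encoding builds on (XOR binding and majority-vote bundling) can be sketched as follows; treating a molecule as a bundle of bound atom/position pairs is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality

def random_hv():
    """Random dense binary hypervector; any two are ~50% similar."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def bind(a, b):
    """XOR binding: associates two concepts; result is dissimilar to both."""
    return a ^ b

def bundle(hvs):
    """Bitwise majority vote: superposition similar to every input."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def hamming_sim(a, b):
    """Similarity in [0, 1]; 0.5 for unrelated hypervectors."""
    return 1.0 - np.mean(a != b)
```

Because all operations are bitwise, similarity queries replace the expensive biophysical calculations the line above refers to with cheap Hamming-distance lookups.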
Bilevel optimization algorithm using first-order methods for nonconvex-strongly-convex problems. Theoretical optimization analysis.
Quantum machine learning circuits using SU(2) equivariance and spin networks. Geometric constraints for variational quantum algorithms.
Weighted Support Vector Machines for classification and probability estimation. Classic machine learning method with applications.
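A weighted SVM differs from the standard formulation only in per-sample weights on the hinge loss. A minimal linear sketch via subgradient descent (hyperparameters and the optimizer choice are illustrative, not the paper's method):

```python
import numpy as np

def weighted_svm_train(X, y, w, C=1.0, lr=0.01, epochs=200):
    """Linear SVM trained by subgradient descent on the weighted objective
    0.5 * ||beta||^2 + C * sum_i w_i * max(0, 1 - y_i * (x_i . beta + b)).
    y in {-1, +1}; w holds per-sample importance weights."""
    n, d = X.shape
    beta, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ beta + b)
        active = margins < 1                       # samples inside the margin
        grad_beta = beta - C * (X[active].T @ (w[active] * y[active]))
        grad_b = -C * np.sum(w[active] * y[active])
        beta -= lr * grad_beta
        b -= lr * grad_b
    return beta, b
```

Upweighting one class recovers cost-sensitive classification, and the weights are also what enable the probability-estimation use mentioned above.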
Dynamic scheduling system for efficient large model training across GPU clusters. Addresses training efficiency and resource utilization.
Web crawling method using neural networks to efficiently find parallel bilingual documents. Targets document discovery for translation.
Network packet queuing optimization technique for low-latency applications. Infrastructure networking approach.
Self-trained fine-tuning paradigm for LLMs on table understanding tasks like NL-to-Code and data cleaning. Reduces need for expensive human labeling.
Continual learning technique combining parameter-efficient fine-tuning with vision transformers to prevent catastrophic forgetting. Addresses sequential task adaptation.
Method for training LLMs to explain their own internal activations using natural language probes. Advances LLM interpretability research.
Transformer attention mechanism compressed to run in under 2MB memory for IoT and wearable devices. Enables NLP deployment on ultra-constrained hardware.
Graph filtering method for multimodal recommendation systems without training overhead. Addresses computational efficiency in recommender systems.
Study redefining non-IID data heterogeneity in federated learning by shifting from label-level to embedding-level task-specific distributions.
Learning dynamically-inspired bases for Koopman and transfer operator approximation in complex nonlinear dynamical systems.
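As background to operator approximation with bases: once a dictionary is fixed, the finite-dimensional Koopman matrix comes from a least-squares fit over lifted snapshot pairs (Extended DMD). A sketch with a hand-chosen monomial dictionary (the learned bases in the line above would replace this fixed choice):

```python
import numpy as np

def edmd(snapshots_x, snapshots_y, dictionary):
    """Extended DMD: lift paired snapshots (x_t, x_{t+1}) through a basis
    dictionary Psi, then solve Psi(y) ~= K Psi(x) in least squares for the
    finite-dimensional Koopman operator K."""
    Px = np.stack([dictionary(x) for x in snapshots_x], axis=1)  # (n_basis, n_pairs)
    Py = np.stack([dictionary(y) for y in snapshots_y], axis=1)
    return Py @ np.linalg.pinv(Px)
```

For the linear dynamics x -> 0.5 x with dictionary [1, x], the recovered K is the expected block-diagonal matrix [[1, 0], [0, 0.5]]; richer dictionaries trade approximation power against conditioning, which is what motivates learning them.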