flaviucipcigan.bsky.social

I really like what TMLR is doing in this space. Rolling submissions, a journal-to-conference track at ICML, and their own infinite conference.

Graphs serve as generic tools to encode the underlying relational structure of data. Often this graph is not given, and so the task of inferring it from nodal observations becomes important. Traditional approaches formulate a convex inverse problem with a smoothness-promoting objective and rely on iterative methods to obtain a solution. In supervised settings where graph labels are available, one can unroll and truncate these iterations into a deep network that is trained end-to-end. Such a network is parameter-efficient and inherits inductive bias from the optimization formulation, an appealing aspect for data-constrained settings in, e.g., medicine, finance, and the natural sciences. But typically such settings care equally about uncertainty over edge predictions, not just point estimates. Here we introduce novel iterations with independently interpretable parameters, i.e., parameters whose values, independent of other parameters' settings, proportionally influence characteristics of the estimated graph, such as edge sparsity. After unrolling these iterations, prior knowledge of such graph characteristics shapes prior distributions over these independently interpretable network parameters to yield a Bayesian neural network (BNN) capable of graph structure learning (GSL) from smooth signal observations. Fast execution and parameter efficiency allow for high-fidelity posterior approximation via Markov Chain Monte Carlo (MCMC) and thus uncertainty quantification on edge predictions. Informative priors unlock modeling tools from Bayesian statistics like prior predictive checks. Synthetic and real data experiments corroborate this model's ability to provide well-calibrated estimates of uncertainty, in test cases that include unveiling economic sector modular structure from S&P 500 data and recovering pairwise digit similarities from MNIST images. Overall, this framework enables GSL in modest-scale applications where uncertainty on the data structure is paramount.
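For context on the kind of optimization such networks unroll: a standard smoothness-based graph learning objective picks edge weights that are small where signals differ a lot, plus a log-degree term that keeps every node connected. The sketch below is a generic projected-gradient solver for that objective, not the paper's iterations; `learn_graph`, `alpha`, and `beta` are illustrative names, and an unrolled network would in effect learn such regularization weights layer by layer.

```python
import numpy as np

def learn_graph(X, alpha=1.0, beta=0.5, lr=0.01, iters=500):
    """Projected gradient descent for smoothness-based graph learning.

    Minimises  sum_ij W_ij * ||x_i - x_j||^2  - alpha * sum_i log(deg_i)
               + beta * ||W||_F^2
    over symmetric, non-negative W with zero diagonal.
    X: (n_nodes, n_signals) matrix of graph signals observed at each node.
    """
    n = X.shape[0]
    # Pairwise squared distances: large Z_ij pushes W_ij towards zero (smoothness).
    Z = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    W = np.ones((n, n)) - np.eye(n)            # dense initial guess
    for _ in range(iters):
        deg = W.sum(axis=1, keepdims=True) + 1e-8
        grad = Z - alpha / deg + 2 * beta * W  # log-degree term keeps nodes connected
        W = W - lr * grad
        W = np.maximum((W + W.T) / 2, 0)       # project: symmetric, non-negative
        np.fill_diagonal(W, 0)
    return W

# Toy usage: two clusters of nodes emitting correlated signals should yield
# a block-structured adjacency with near-zero weights between clusters.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 50))
X = np.vstack([base[0] + 0.1 * rng.normal(size=(5, 50)),
               base[1] + 0.1 * rng.normal(size=(5, 50))])
print(np.round(learn_graph(X), 2))
```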

1
diskshima.bsky.social

Read the ICML 2024 best paper on stealing model weights. Really smart approach of using SVDs to steal the LLM's hidden dimension count and last layer weight matrices! ICML: icml.cc/virtual/2024... arxiv.org/abs/2403.06634
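The SVD trick mentioned here can be illustrated on synthetic data: because the final layer is linear, every returned logit vector lies in a subspace whose dimension equals the hidden size, so the singular values of many stacked logit vectors collapse after that many components. A toy sketch only, with synthetic weights standing in for a real API and none of the paper's machinery for recovering logits from restricted outputs:

```python
import numpy as np

# Toy illustration of the rank trick: logits from a linear final layer span a
# subspace of dimension equal to the hidden size, so the singular values of
# stacked logit vectors drop to numerical noise after that many components.
rng = np.random.default_rng(0)
vocab, hidden, n_queries = 1000, 64, 256

W_out = rng.normal(size=(vocab, hidden))   # unknown final-layer weights
H = rng.normal(size=(n_queries, hidden))   # hidden states for many prompts
logits = H @ W_out.T                       # (n_queries, vocab), as seen via an API

s = np.linalg.svd(logits, compute_uv=False)
# Count singular values well above numerical noise -> recovered hidden size.
recovered = int(np.sum(s > 1e-6 * s[0]))
print(recovered)                           # 64
```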

0
hannes-stark.bsky.social

Come discuss an ICML best paper award with the author Rob Brekelmans in our reading group on Monday! "Probabilistic Inference in LMs via Twisted SMC" arxiv.org/abs/2404.17546 portal.valencelabs.com/logg

0
cscheid.net

This might be "simply" because peer review is structurally broken (or I've outdated views! I haven't done this in years), but my perspective from reviewers in vis, eurovis, icml, neurips, kdd and facct was "they work with the venue's best interests at heart, not mine"

1
flaviucipcigan.bsky.social

Reminds me of two papers that caught my eye at ICML. 🧵 1/ In the Platonic Representation Hypothesis, the authors postulate that neural networks trained with different objectives and architectures on different data and modalities converge to the same statistical model in representation space.
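One common way to probe that kind of claim on two concrete models is a representational-similarity score computed on the same inputs. The sketch below uses linear CKA purely as a minimal stand-in (it is not the paper's exact alignment metric), with random features simulating two models' embeddings of shared samples:

```python
import numpy as np

def linear_cka(A, B):
    """Linear CKA between two representation matrices (n_samples x dim).

    Returns a score in [0, 1]; higher means the two embeddings induce more
    similar pairwise structure over the same inputs.
    """
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    hsic = np.linalg.norm(B.T @ A, 'fro') ** 2
    return hsic / (np.linalg.norm(A.T @ A, 'fro') * np.linalg.norm(B.T @ B, 'fro'))

# Toy check: two "models" that are different linear views of the same latent
# factors score high; an unrelated model scores low.
rng = np.random.default_rng(0)
Z = rng.normal(size=(2000, 32))              # shared latent structure
feats_a = Z @ rng.normal(size=(32, 64))      # model A's embeddings
feats_b = Z @ rng.normal(size=(32, 64))      # model B's embeddings
feats_c = rng.normal(size=(2000, 64))        # unrelated embeddings
print(linear_cka(feats_a, feats_b))          # high
print(linear_cka(feats_a, feats_c))          # low
```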

1
mm-jj-nn.bsky.social

The conference rotation seems reasonably sustainable to me? If the reviewing system doesn't entirely collapse, I don't see the cadence of NeurIPS, AAAI, IJCAI, ICML, ICLR, EMNLP etc. going away, with or without social media.

1
thuereygroup.bsky.social

ICML'24 in Vienna was great, thanks to all organizers! Here you can enjoy all our workshop submissions in detail: ge.in.tum.de/2024/08/02/i... Among others: Flow matching for inverse problems with physics constraints, higher-order differentiable Navier-Stokes solvers, Equivariance in GNNs ...

0