Relevant Components of Monad Bayes for Profiling
What constitutes a good benchmark
- Concrete components/transformers of interest:
These are plausible components for benchmarking the performance of individual concrete transformers.
- SamplerIO

  ```haskell
  -- | An 'IO' based random sampler using the MWC-Random package.
  newtype SamplerIO a = SamplerIO (ReaderT GenIO IO a)
    deriving (Functor, Applicative, Monad, MonadIO)
  ```
- SamplerST

  ```haskell
  -- | An 'ST' based random sampler using the MWC-Random package.
  newtype SamplerST a = SamplerST (forall s. ReaderT (GenST s) (ST s) a)
  ```
- Enumerator

  ```haskell
  -- | An exact inference transformer that integrates
  -- discrete random variables by enumerating all execution paths.
  newtype Enumerator a = Enumerator (WriterT (Product (Log Double)) [] a)
    deriving (Functor, Applicative, Monad, Alternative, MonadPlus)
  ```
- Weighted

  ```haskell
  -- | Execute the program using the prior distribution, while
  -- accumulating likelihood. (StateT is more efficient than WriterT.)
  newtype Weighted m a = Weighted (StateT (Log Double) m a)
    deriving (Functor, Applicative, Monad, MonadIO, MonadTrans, MonadSample)
  ```
- Population

  ```haskell
  -- | A collection of weighted samples, or particles.
  newtype Population m a = Population (Weighted (ListT m) a)
    deriving (Functor, Applicative, Monad, MonadIO, MonadSample, MonadCond, MonadInfer)
  ```
- Sequential

  ```haskell
  -- | Represents a computation that can be suspended at certain points.
  newtype Sequential m a = Sequential {runSequential :: Coroutine (Await ()) m a}
    deriving (Functor, Applicative, Monad, MonadTrans, MonadIO)
  ```
- Traced

  ```haskell
  -- | A tracing monad where only a subset of random choices are traced.
  data Traced m a = Traced
    { model :: Weighted (FreeSampler m) a,
      traceDist :: m (Trace a)
    }
  ```
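The individual transformers above can be micro-benchmarked on a shared toy model. The following is a minimal sketch, assuming monad-bayes (the 0.1.x API shown above) and criterion as the harness; `twoFlips` is a made-up model, kept finite so that `Enumerator` can integrate it exactly.

```haskell
import Control.Monad.Bayes.Class (MonadSample, bernoulli)
import Control.Monad.Bayes.Enumerator (Enumerator, enumerate)
import Control.Monad.Bayes.Sampler (SamplerIO, SamplerST, sampleIO, sampleSTfixed)
import Criterion.Main (bench, defaultMain, nf, whnf, whnfIO)

-- | A toy model with finitely many execution paths:
-- the number of heads in two fair coin flips.
twoFlips :: MonadSample m => m Int
twoFlips = do
  a <- bernoulli 0.5
  b <- bernoulli 0.5
  pure (fromEnum a + fromEnum b)

-- | Run the same polymorphic model under each concrete interpretation.
main :: IO ()
main =
  defaultMain
    [ bench "SamplerIO" $ whnfIO (sampleIO (twoFlips :: SamplerIO Int)),
      bench "SamplerST" $ whnf sampleSTfixed (twoFlips :: SamplerST Int),
      bench "Enumerator" $ nf enumerate (twoFlips :: Enumerator Int)
    ]
```

Because the model is written once against `MonadSample`, the benchmark isolates the cost of each interpreter rather than the model itself.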
- Algorithms of interest:
- PMMH

  ```haskell
  -- | Particle Marginal Metropolis-Hastings sampling.
  pmmh ::
    MonadInfer m =>
    -- | number of Metropolis-Hastings steps
    Int ->
    -- | number of time steps
    Int ->
    -- | number of particles
    Int ->
    -- | model parameters prior
    Traced m b ->
    -- | model
    (b -> Sequential (Population m) a) ->
    m [[(a, Log Double)]]
  pmmh t k n param model =
    mh t (param >>= runPopulation . pushEvidence . Pop.hoist lift . smcSystematic k n . model)
  ```
PMMH is a good candidate for benchmarking a whole inference-transformer system: it uses `Traced`, `Sequential`, and `Population`, the last of which is itself built from the `Weighted` and `ListT` transformers.
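An end-to-end benchmark of that stack could look like the sketch below. It assumes monad-bayes exposes `pmmh` from `Control.Monad.Bayes.Inference.PMMH` (the module layout may differ across versions) plus criterion; the observations and the conjugate-Gaussian model are made up for illustration.

```haskell
import Control.Monad (forM_)
import Control.Monad.Bayes.Class (MonadInfer, factor, normal, normalPdf)
import Control.Monad.Bayes.Inference.PMMH (pmmh)
import Control.Monad.Bayes.Sampler (sampleIO)
import Control.Monad.Bayes.Weighted (prior)
import Criterion.Main (bench, defaultMain, whnfIO)

-- | Made-up observations of a noisy constant signal.
observations :: [Double]
observations = [0.2, -0.1, 0.4, 0.1]

-- | Score each observation under a unit-variance Gaussian centred on the
-- latent parameter; each 'factor' is a suspension point for 'Sequential'.
model :: MonadInfer m => Double -> m Double
model mu = do
  forM_ observations (\y -> factor (normalPdf mu 1 y))
  pure mu

-- | 100 MH steps, one SMC step per observation, 50 particles.
main :: IO ()
main =
  defaultMain
    [ bench "pmmh/toy" $
        whnfIO (sampleIO (prior (pmmh 100 (length observations) 50 (normal 0 1) model)))
    ]
```

Varying the step, particle, and observation counts separately would show which layer of the `Traced`/`Sequential`/`Population` stack dominates the cost.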
Thoughts
- What accounts for the difference in efficiency (if any) between the concrete library implementations and the “from-scratch” implementations found in the paper?
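To make that question concrete, here is a minimal "from-scratch" weighting transformer in the spirit of the paper, using `StateT` over plain `Double` where the library's `Weighted` uses `StateT` over `Log Double`. The names (`WeightedS`, `scoreS`, `runWeightedS`) are invented for this sketch; benchmarking it against the library's `Weighted` on the same model would isolate overhead from log-domain arithmetic and the extra newtype/deriving machinery.

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

import Control.Monad.State.Strict (StateT, modify', runStateT)

-- | Hand-rolled analogue of the library's 'Weighted': accumulate the
-- likelihood in strict state, starting from weight 1.
newtype WeightedS m a = WeightedS (StateT Double m a)
  deriving (Functor, Applicative, Monad)

-- | Multiply the accumulated weight by a likelihood factor.
scoreS :: Monad m => Double -> WeightedS m ()
scoreS w = WeightedS (modify' (* w))

-- | Run the program, returning the result with its final weight.
runWeightedS :: Monad m => WeightedS m a -> m (a, Double)
runWeightedS (WeightedS m) = runStateT m 1
```

For instance, `runIdentity (runWeightedS (scoreS 0.5 >> pure True))` evaluates to `(True, 0.5)`.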