
Oxford astro-ph Coffee: group items tagged "cosmological parameters"


Phil Bull

On the measurement of cosmological parameters - 3 views

  •  A recent-historical analysis of cosmological parameter estimation. "Of the 28 measurements of Omega_Lambda in our sample published since 2003, only 2 are more than 1 sigma from the WMAP results. Wider use of blind analyses in cosmology could help to avoid this."
  •  Their detection of confirmation bias (a.k.a. unconscious experimenter bias, or groupthink) may not be so significant: most, if not all, of those Omega_Lambda measurements will have used WMAP CMB priors. The next step would be to try to correct for that. Their warning for future analyses is spot on, though: parameter estimation needs to be done blind.
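    The blinding being called for can be as simple as adding a hidden offset to the parameter values before anyone compares them to WMAP, revealing the offset only once the pipeline is frozen. A minimal sketch of that idea (the `blind`/`unblind` helpers, the seed, and the offset scale are all illustrative assumptions, not anything from the paper):

    ```python
    import numpy as np

    def blind(values, seed=12345, scale=0.05):
        """Add a hidden offset so the analyst cannot compare results
        to prior measurements (e.g. WMAP) mid-analysis. The seed is
        kept secret until the analysis is finalised."""
        rng = np.random.default_rng(seed)
        offset = rng.uniform(-scale, scale)
        return np.asarray(values) + offset

    def unblind(blinded_values, seed=12345, scale=0.05):
        """Recover the true values by regenerating the same offset."""
        rng = np.random.default_rng(seed)
        offset = rng.uniform(-scale, scale)
        return np.asarray(blinded_values) - offset

    # Example: blind some Omega_Lambda estimates during the analysis
    omega_lambda = np.array([0.72, 0.73, 0.71])
    hidden = blind(omega_lambda)
    recovered = unblind(hidden)
    ```

    The point is purely procedural: as long as the seed stays sealed, no one can steer intermediate results toward the consensus value.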
Phil Bull

CosmoHammer: Cosmological parameter estimation with the MCMC Hammer - 1 views

  •  Modern MCMC method for cosmological parameter estimation. "While Metropolis-Hastings is constrained by overheads, CosmoHammer is able to accelerate the sampling process from a wall time of 30 hours on a single machine to 16 minutes by the efficient use of 2048 cores. Such short wall times for complex data sets opens possibilities for extensive model testing and control of systematics."
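    CosmoHammer builds on emcee (the "MCMC Hammer"), whose core update is the Goodman-Weare affine-invariant "stretch move": each walker proposes a step along the line to a randomly chosen partner walker. A serial sketch of that move on a toy 2D Gaussian likelihood (the likelihood, walker count, and step count are illustrative assumptions; a real analysis would evaluate a cosmological likelihood, typically via a Boltzmann code):

    ```python
    import numpy as np

    def log_prob(theta):
        # Toy stand-in for a cosmological likelihood: an isotropic
        # Gaussian centred on (0.3, 0.7) with sigma = 0.1.
        return -0.5 * np.sum((theta - np.array([0.3, 0.7]))**2 / 0.01)

    def stretch_move(walkers, log_prob, a=2.0, rng=None):
        """One serial sweep of Goodman-Weare stretch moves over the
        ensemble. Each walker k picks a partner j != k and proposes
        walkers[j] + z * (walkers[k] - walkers[j])."""
        rng = rng or np.random.default_rng()
        n, ndim = walkers.shape
        lp = np.array([log_prob(w) for w in walkers])
        for k in range(n):
            j = rng.integers(n - 1)
            if j >= k:
                j += 1                                # ensure partner != k
            z = (1.0 + (a - 1.0) * rng.random())**2 / a  # z ~ g(z) on [1/a, a]
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp_prop = log_prob(prop)
            # Accept with probability min(1, z^(ndim-1) * p(prop)/p(cur))
            if np.log(rng.random()) < (ndim - 1) * np.log(z) + lp_prop - lp[k]:
                walkers[k], lp[k] = prop, lp_prop
        return walkers

    # Run a short chain with 32 walkers
    rng = np.random.default_rng(0)
    walkers = rng.normal([0.3, 0.7], 0.05, size=(32, 2))
    for _ in range(500):
        walkers = stretch_move(walkers, log_prob, rng=rng)
    ```

    The parallel speed-up quoted in the abstract comes from evaluating the (expensive) likelihood for many walkers at once across cores, which the ensemble structure makes embarrassingly parallel.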
Celia Escamilla

The Pseudo-Rip - 2 views

  •  This is a dramatic illustration of the fact that any amount of observational data, necessarily restricted to the past lightcone and necessarily with non-zero errors, cannot predict anything mathematically about the future even one hour hence without further assumptions. It is also a display of the difference between mathematics and physics: the physicist necessarily employs intuition about the real world.
  •  I'm confused by the final statement that we can't predict anything without further assumptions. Surely it's a function of the complexity of the model. If we have a simple model, so that all the relevant constants are fixed by the observations, then the observations uniquely predict the future. If, however, your model is complicated and has parameters unconstrained by experiment, then you can choose them to give "ripping" cosmologies within the hour. This is why we like to choose models that make sense (which is our intuition, as they say: for example, not allowing silly things like phantom fields, and choosing to work within rigorous frameworks and to embed or motivate models within them), and also why we do model comparison. It may be possible to have some rip within the hour, but we can quantify how unlikely that is given the data, or how unlikely it is within the landscape. The statement they give sounds very deep, but I think it has very little content.
Tessa Baker

[1207.3804] Examining the evidence for dynamical dark energy - 0 views

  •  Hints of something interesting, or a data artefact? Also: http://arxiv.org/pdf/1207.4781v1.pdf
Phil Marshall

A 2% Distance to z=0.35 by Reconstructing Baryon Acoustic Oscillations - 1 views

  •  This set of three papers (the link is to the first one, by Nikhil Padmanabhan) describes a factor-of-two improvement in the SDSS DR7 BAO distance estimate, just by improving the data analysis. Basically, non-linear gravitational collapse causes the usual BAO feature in the galaxy correlation function to appear smoothed out: it can be partially sharpened back up by using the Zel'dovich approximation to reconstruct the density field given the redshift and position data. The result is an increase in cosmological parameter accuracy roughly equivalent to surveying 3-4 times more sky. Software is vital!
  •  It is interesting to see, in the third paper, how the constraints in the H_0-Omega_m plane are robust to different scenarios of curvature and dark energy, and compatible with direct measurements of H_0. But, as expected, they are also affected by what one assumes about the neutrinos.
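    The reconstruction step amounts to estimating the Zel'dovich displacement field Psi = -grad(inverse_laplacian(delta)) from the observed density contrast and shifting tracers back along it. A rough sketch of that displacement calculation on a periodic grid with FFTs (the grid approach and function name are assumptions for illustration; the papers' actual pipeline also handles smoothing, galaxy bias, and redshift-space distortions):

    ```python
    import numpy as np

    def zeldovich_displacement(delta, box_size):
        """Zel'dovich displacement Psi = -grad(phi), where
        laplacian(phi) = delta, solved on a periodic grid in Fourier
        space. Moving tracers by -Psi partially undoes the nonlinear
        smearing of the BAO feature."""
        n = delta.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
        kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                 # avoid 0/0; zero mode removed below
        delta_k = np.fft.fftn(delta)
        phi_k = -delta_k / k2             # inverse Laplacian in Fourier space
        phi_k[0, 0, 0] = 0.0              # mean displacement is unphysical
        # Psi_i = -ifft(i * k_i * phi_k), one FFT per component
        psi = [np.real(np.fft.ifftn(-1j * ki * phi_k)) for ki in (kx, ky, kz)]
        return np.stack(psi, axis=-1)     # shape (n, n, n, 3)
    ```

    For a single plane wave delta = cos(x) this gives Psi_x = -sin(x), i.e. matter is displaced away from overdense regions, which is the motion the reconstruction then reverses.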