Valkenburg et al. look at the way inhomogeneities in the universe introduce apparent uncertainty into dark energy measurements, if one assumes a homogeneous world model when interpreting distance measurements. They also point out that cosmic variance will lead to bias in w(a). Modeling the structure in the universe is important! My question: is weak lensing immune from these worries, since it involves treating all the inhomogeneities?
Hezaveh et al. simulate ALMA Cycle 1 (i.e. 0.16" resolution) observations of Herschel/SPT sub-mm lensed galaxies, and claim that the magnification is high enough, and the sources likely to be complex enough, to enable the detection of at least one DM subhalo of mass 10^8 or greater *in every system*.
Purcell and Zentner on why the "Too Big To Fail" problem might be a storm in a teacup: "We propose that the large variation in subhalo populations among different host halos can explain the dearth of large, dense subhalos orbiting the Milky Way without making any adjustments to the host halo mass or accounting for baryonic feedback processes."
Deason et al. have an improved measurement of the density and velocities of blue horizontal branch stars in the MW halo: they fit dynamical models and infer a halo mass of about 1e12 Msun (as before) and quite a high concentration (cvir ~ 20). Since concentration reflects age, this is consistent with the picture in which the MW has been undisturbed for a long time.
The authors note a new phenomenon: linear portions of quasar lightcurves have gradients that appear to vary systematically with quasar redshift (higher-z quasars have steeper gradients). Interesting if real: it could sharpen up QSO redshift estimation in cadenced imaging surveys beyond what you can do with the QSO colours.
This set of three papers (the link is to the first one, by Nikhil Padmanabhan) describes a factor of two improvement in the SDSS DR7 BAO distance estimate, just by improving the data analysis. Basically, non-linear gravitational collapse causes the usual BAO feature in the galaxy correlation function to appear smoothed out: it can be partially sharpened back up by using the Zel'dovich approximation to reconstruct the density field given the redshift and position data. The result is an increase in cosmological parameter accuracy roughly equivalent to surveying 3-4 times more sky. Software is vital!
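The reconstruction step can be sketched numerically. Here is a minimal illustration of the idea (not the Padmanabhan et al. pipeline): given a smoothed, gridded overdensity field delta, solve for the Zel'dovich shift field s with div(s) = -delta_smoothed in Fourier space, then move galaxies (and randoms) by s to partially undo the nonlinear smearing of the BAO feature. The grid size, box size and smoothing scale below are all illustrative choices, not values from the papers.

```python
import numpy as np

def zeldovich_shift_field(delta, box_size, smooth_scale=10.0):
    """Estimate the Zel'dovich shift field s from a gridded overdensity
    delta (shape (n, n, n), box side box_size in Mpc/h).

    In Fourier space: s(k) = i k delta_s(k) / k^2, where delta_s is delta
    Gaussian-smoothed on smooth_scale, so that div(s) = -delta_s.
    Displacing galaxies by s partially sharpens the smeared BAO peak.
    """
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # rad / (Mpc/h)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid division by zero; zero mode carries no shift
    delta_k = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth_scale**2)
    s = [np.fft.ifftn(1j * ki * delta_k / k2).real for ki in (kx, ky, kz)]
    return np.stack(s)  # shape (3, n, n, n)
```

In a real analysis the shift field is interpolated to each galaxy's position and applied in redshift space; this toy version just returns the gridded field.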
Epic paper by Sluse et al., on high-precision astrometry in lensed quasar systems. Attention optical astronomers! They deconvolve their images! And get very small error bars as a result. Interesting claim about being able to quantify the lens environment (the dreaded "external convergence"). This is the biggest systematic error in H0/w determination - great if we can reduce it further through improved lens modelling.
Boylan-Kolchin et al. identify a new problem with CDM at sub-galactic scales: the Aquarius simulated MW galaxy halos have subhalos that are about 5 times more massive than the actual dwarf satellites we see. Are we underestimating the MW satellites' masses somehow? Or is there something wrong with the simulations? Or both? Anyway, as Phil B said: add it to the list of things to investigate about CDM :-)
Interesting: Strigari & Wechsler prefer to state the problem as the sims predicting 25-75 times as many subhalos at the Fornax mass scale as are observed in the MW system - and in the paper you posted they look at thousands of MW analogs in the SDSS survey and find that the MW is not atypical. This strengthens MBK's conclusion, that there is a problem with CDM - although note that S&W put the emphasis on galaxy formation not being well understood at this mass scale. They imagine that there really are all those dark Fornaxes out there! Pretty cool - now, if we could just see them somehow...
CosmoMC gets upgraded by the Cavendish inference team (Hobson, Feroz, and now Graff) - they approximate complex likelihood functions with neural networks, which are then much, much faster to evaluate. Could be a real time-saver.
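The general trick is worth sketching (this is the emulator idea, not the authors' actual code): train a cheap regressor on a set of expensive likelihood evaluations, then sample against the fit. Below is a toy one-hidden-layer network in plain NumPy standing in for their neural network; the architecture, learning rate and test likelihood are all illustrative.

```python
import numpy as np

def train_surrogate(theta, loglike, n_hidden=32, lr=0.02, epochs=5000, seed=0):
    """Fit a one-hidden-layer tanh network to (theta, loglike) samples by
    full-batch gradient descent on the mean squared error.

    theta:   array of parameter values, shape (n_samples,) or (n_samples, d)
    loglike: expensive log-likelihood evaluated at each theta
    Returns a cheap function approximating loglike(theta).
    """
    rng = np.random.default_rng(seed)
    X = np.atleast_2d(np.asarray(theta, float)).reshape(len(loglike), -1)
    y = np.asarray(loglike, float)
    W1 = rng.normal(0.0, 1.0, (X.shape[1], n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, n_hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden activations
        err = h @ W2 + b2 - y               # residuals of current fit
        gW2 = h.T @ err / len(y); gb2 = err.mean()
        dh = np.outer(err, W2) * (1.0 - h**2)   # backprop through tanh
        gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    def predict(t):
        T = np.asarray(t, float).reshape(-1, X.shape[1])
        return np.tanh(T @ W1 + b1) @ W2 + b2
    return predict
```

The payoff is that each surrogate call is a couple of matrix products, so an MCMC run that would need millions of expensive likelihood calls only needs a few thousand for the training set.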
Kelly and Kirshner look at the SDSS images and spectra of the host galaxies of more than 500 nearby supernovae. Seems like an interesting complement to the SNLS host studies, I'd be interested to hear from Mark where this one fits. They can resolve the galaxies very well, so can make statements like "the SN Ic-BL and SN IIb explode in exceptionally blue locations" :-)
This looks like it might be interesting - new optical spectra and Spitzer IR data for 4 galaxies at z=2 show that the dust in these systems is rather different from that in the local universe. The high magnification provided by gravitational lenses arranged in front of them helped a lot!
Suyu et al. combine strong gravitational lensing and stellar kinematics data for a spiral galaxy to measure the mass of both the disk and the dark matter halo. The constraints are very strong - they find an oblate, flattened halo, and get a disk stellar mass with small uncertainty (0.1 dex); when they compare this with the stellar mass from the disk colours and K-band magnitude they find that stellar population models with a Chabrier IMF work, and Salpeter does not - the opposite of the case for massive elliptical galaxies.
Andy Lawrence suggests that the unaccounted for UV flux in AGN spectra could be coming from "clouds" of gas orbiting the black hole at ~10 Schwarzschild radii, and reflecting light as UV line emission which all gets blurred together to make a broad continuum due to the insane speeds involved.
I saw Andy yesterday at the Edinburgh sims workshop - he agreed that his clouds were probably shreds, but all that really matters is the filling factor and ionisation state, and the code he used to play around with it all is called "cloudy"... I told him we found the model pretty plausible (stuff comes off accretion disks and stays in orbit, absorbing and emitting, fine), his hope is that someone makes a more detailed model, checks his results and does some inference. That'll be Lance then...
I've not read the whole thing (it's 44 pages!) but Yu Lu is, IMO, doing the Right Thing in this field - he takes a Semi Analytic Model of galaxy formation, and *actually fits* all the parameters to the data (in this case, the observed K-band luminosity function). Some parameters are well-constrained (implying we may have learnt something about galaxies), while others show strong degeneracies, indicating what new physics needs to be included. Seems like the models in most of the parameter volume fail to predict some other datasets, giving more clues on how to improve the model. The key thing is that by doing the inference properly, Lu has elevated SAM study to a quantified learning process.
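Lu's actual machinery is a full semi-analytic model plus sophisticated sampling; a toy sketch of the fit-the-LF-parameters idea, with a Schechter-like stand-in for the model and a bare-bones Metropolis sampler (all functional forms and numbers here are illustrative, not from the paper):

```python
import numpy as np

def schechter_counts(params, logL):
    """Toy Schechter-like luminosity function in log10(L/L*);
    params = (log10 phi*, alpha). Stand-in for a full SAM prediction."""
    logphi, alpha = params
    L = 10.0 ** logL
    return 10.0 ** logphi * L ** (alpha + 1) * np.exp(-L)

def log_post(params, logL, counts, frac_err=0.1):
    """Gaussian log-posterior (flat priors) comparing model to observed counts."""
    model = schechter_counts(params, logL)
    sigma = frac_err * counts
    return -0.5 * np.sum(((model - counts) / sigma) ** 2)

def metropolis(logpost, x0, steps=8000, scale=0.03, seed=1):
    """Random-walk Metropolis sampler; returns the full chain."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    lp = logpost(x)
    chain = np.empty((steps, x.size))
    for i in range(steps):
        prop = x + rng.normal(0.0, scale, size=x.size)
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain
```

The point of doing it this way, as in the paper, is that you get the full posterior: well-constrained directions tell you what the data taught you, and the degenerate directions tell you where the model needs new physics or new data.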
There's been quite a bit of discussion about this issue this summer, it came up at the Bologna dark matter meeting as well as the Aosta strong lensing workshop. Basically, in strong lens systems we infer *more* subhalos/satellites than CDM predicts for massive lens galaxies (the satellites cause "millilensing", where the quasar/radio source image fluxes are affected by the small amounts of additional magnification and demagnification). One suggested resolution to this problem is to include all the subhalos along the line of sight - Metcalf (2008) claimed this was the answer, and now Dandan Xu has tested this claim using the Millennium Simulation, and various assumptions for the satellite subhalo density profile.
"An expanding fluid leaves its equilibrium state; the energy density decreases and the pressure also decreases. In the absence of bulk viscosity, the fluid relaxes instantaneously and pressure and density are related by the equation of state. Bulk viscosity dampens this behavior by introducing a finite relaxation timescale, hence producing a shift between the equation of state pressure and the true pressure. We note that for a large enough ζ, the effective pressure becomes negative and could mimic a dark energy behavior."
In particular, the authors Gagnon & Lesgourgues predict an effective neutrino number around 3.04, w0 around -0.9, and wa around 0.1. Something to shoot for!
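The mechanism in the quoted passage can be written down compactly; a minimal sketch using the standard first-order (Eckart) form, which may differ in detail from the expressions the paper actually uses:

```latex
% Effective pressure of an FRW fluid with bulk viscosity \zeta
% (expansion scalar \theta = 3H):
p_{\mathrm{eff}} = p - \zeta\,\theta = p - 3\,\zeta H ,
% which turns negative -- i.e. mimics dark energy -- once
\zeta > \frac{p}{3H} .
```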