If you combine two liquids in a laboratory environment, the mixture may occupy slightly less volume than the two liquids did separately. This matters when you weigh them on a scale in air: the smaller mixture displaces less air, so the buoyant force of the air on it is slightly smaller, and the scale reading for the mixture therefore differs slightly from the sum of the two separately measured weights. The buoyancy of the air affects the weight registered on the scale, although not the actual mass of the liquids, which stays the same (in the absence of a nuclear reaction).
This is similar to the way that pulling very lightly on a string hanging from the ceiling shifts your reading on a bathroom scale: a small extra force changes the measured weight without changing your mass.
It turns out that the effect is tiny: on the order of 0.001% for equal masses of liquid whose combined volume is reduced by 1% through mixing, which is typical of what you might see in a laboratory. But it is an effect in everyday physics that I wasn't aware of until reading the linked material today, despite having had a fair amount of college physics and physical chemistry and having read a great deal about physics since then. So, I figured it deserved a post of its own.
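A quick back-of-the-envelope calculation confirms the size of the effect. This is a minimal sketch, with illustrative assumptions of my own: water-like liquids, air at roughly room conditions, half a kilogram of each liquid, and a 1% volume contraction on mixing.

```python
# Assumed illustrative values (not from the original post).
RHO_AIR = 1.2        # kg/m^3, air at roughly 20 C and 1 atm
RHO_LIQUID = 1000.0  # kg/m^3, water-like liquid
G = 9.81             # m/s^2

def scale_reading(mass_kg, volume_m3):
    """Apparent weight on a scale in air: true weight minus air buoyancy."""
    return mass_kg * G - RHO_AIR * volume_m3 * G

mass_each = 0.5                    # kg of each liquid
vol_each = mass_each / RHO_LIQUID  # m^3

# Weigh the two liquids separately, then weigh the contracted mixture.
separate = 2 * scale_reading(mass_each, vol_each)
mixed_volume = 2 * vol_each * 0.99  # combined volume shrinks by 1%
mixed = scale_reading(2 * mass_each, mixed_volume)

fractional_change = (mixed - separate) / separate
print(f"fractional change in scale reading: {fractional_change:.2e}")
# on the order of 1e-5, i.e. roughly 0.001% of the total weight
```

The magnitude falls out directly: the change in buoyancy is the density of air times the lost volume, and air is about 0.1% as dense as water, so a 1% volume contraction moves the reading by about 0.001%.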
For many applications these kinds of fine details don't matter. The rule of thumb that I was taught in college physics was that three significant digits is usually fine for any practical application. But in a high-precision experiment, such as a measurement of an electromagnetic quantity like muon g-2, you need to take factors of this scale into account (along with others, such as the gravitational effect of tidal forces and the impact of slight temperature changes on substance density) to get an accurate answer, and a tiny "unknown unknown" that you have failed to account for could result in systematic error that won't show up in your margin of error bars.
This is one reason that a "five sigma" result, the conventional threshold for a "discovery" in physics, can't always be taken at face value, especially when the precision of the measurement is very high: the significance is computed from a margin of error that may omit some overlooked factor, and an overlooked factor is often much more probable than the statistical fluke needed to produce a five sigma result. So, in any high-precision scientific measurement, there is effectively a maximum meaningful precision, set by the likelihood that some factor has not been accounted for in the experiment and its interpretation.
This is why replicating results, and having more than one independent group make the measurement even at the same facility (as was done at the Tevatron and the LHC), is necessary to make each group's results more robust and trustworthy.
But even multiple repetitions of the same experiment can miss these factors if everyone has the same training and, through groupthink, makes identical omissions of potential sources of error.