Let me add a few more stray thoughts.
1) It is common in college-level science courses to use "significant digits" to indicate an approximate level of precision. A number like "11" implies a precision of something like +/- 1 or +/- 0.5, while "11.0" would imply something like +/- 0.1 or +/- 0.05.
I have seen professors who seem to think "significant digits" is the last word in error analysis and expect students to follow the "significant digit" rules exactly for all homework. In fact, it is simply a rule of thumb that works pretty well in many circumstances, but it is easy to find situations where the rules work quite poorly.
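For example, the rules get shaky when you subtract two nearly equal numbers. Here is a little sketch of my own (the numbers and the helper function are purely illustrative) comparing the precision implied by the written digits with a simple propagated uncertainty:

import math

def implied_half_interval(value_str: str) -> float:
    """Half-width implied by the last written digit, e.g. '11.2' -> 0.05."""
    if "." in value_str:
        decimals = len(value_str.split(".")[1])
        return 0.5 * 10 ** (-decimals)
    return 0.5  # '11' -> +/- 0.5

a, b = "11.2", "11.1"
ua, ub = implied_half_interval(a), implied_half_interval(b)

diff = float(a) - float(b)            # 0.1
u_diff = math.sqrt(ua**2 + ub**2)     # ~0.07 if the errors are independent

print(f"{a} - {b} = {diff:.2f} +/- {u_diff:.2f}")
# The decimal-place rule writes the result as '0.1', implying roughly +/- 0.05,
# but the propagated uncertainty is ~0.07, i.e. about 70% of the result.
# Inputs known to a few parts per thousand give a difference that is barely
# known at all, and the significant-digit convention does not flag that.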
2) In the absence of any clarification, I would follow Jim's interpretation that a spec of +/- 0.1 means a deviation of 0.10001 is out of spec, while 0.09999 is in spec. Of course, the best plan is to keep all the parts well away from the limits, so there is no question.
3) Economically, it may not be worth arguing about parts right near the spec. You could easily spend more money determining whether a part is actually at +0.09999 or +0.10001 than the part is worth. It also depends on whose responsibility it is: the producer's to assure that the part is in spec, or the customer's to show that it is out of spec.
This is where the idea of guardbanding comes in. A conscientious producer might reject any parts at +0.10 because he isn't sure they are good. For internal purposes, the test criterion might be tightened to +/- 0.09. A conscientious customer might accept any parts at +0.10 because he isn't sure they are bad. For internal purposes, the test criterion might be loosened to +/- 0.11.
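To make that concrete, here is a rough sketch, assuming a +/- 0.10 spec and an arbitrary 0.01 guardband on each side (in practice the guardband width would be chosen from the measurement uncertainty of the gauge, not pulled out of the air like this):

SPEC = 0.10        # drawing tolerance: +/- 0.10
GUARDBAND = 0.01   # illustrative value only

def producer_accepts(deviation: float) -> bool:
    """Producer tightens the limit so borderline parts are not shipped."""
    return abs(deviation) <= SPEC - GUARDBAND   # effectively +/- 0.09

def customer_rejects(deviation: float) -> bool:
    """Customer loosens the limit so borderline parts are not returned."""
    return abs(deviation) > SPEC + GUARDBAND    # effectively +/- 0.11

for d in (0.085, 0.095, 0.105, 0.115):
    print(f"{d:+.3f}: ship={producer_accepts(d)}, return={customer_rejects(d)}")
# A part measuring +0.095 would not be shipped by this producer, but a part
# that somehow arrived at +0.105 would not be sent back by this customer.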
4) Perhaps Taguchi had it right. Rather than an absolute limit for good/bad, a sliding scale might be more appropriate.
At one time, I was trying to figure out an "economic capability index". The loss function for bad parts is specified, and then the "economic capability index" is simply the cost of poor quality associated with the parts. If the parts are exactly at the target, the index is 0. The farther the parts are from ideal, the higher the average loss. If the average cost of poor quality exceeds some value, then there is a penalty to the producer.
Determining the appropriate loss function would take a bit of thought (but that's what QEs are for), and calculating the results is somewhat involved (but that's what computers are for), but interpreting the results - money! - is simple (I guess that's what managers are for).
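For what it's worth, here is a minimal sketch of the calculation I had in mind, assuming the usual quadratic (Taguchi) loss function; the loss constant, the lot data, and the penalty threshold are all made up for illustration:

def average_loss(measurements, target, k):
    """Average cost of poor quality per part, quadratic loss k*(x - target)^2."""
    return sum(k * (x - target) ** 2 for x in measurements) / len(measurements)

target = 10.00             # nominal dimension
k = 500.0                  # dollars per unit-squared deviation (assumed)
penalty_threshold = 0.25   # max tolerable average loss per part (assumed)

lot = [9.98, 10.01, 10.03, 9.97, 10.05, 10.00, 9.99]
index = average_loss(lot, target, k)

print(f"average cost of poor quality: ${index:.3f} per part")
if index > penalty_threshold:
    print("penalty owed")       # this lot comes out at about $0.35 per part
else:
    print("within the agreed economic limit")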
Tim F