The only weighted movements I do are squat, clean and press, and lat pulldown (working the muscles you typically use for "explosive" movements). Everything else is medicine ball, pull-ups, push-ups, dips, ab work, and planches. For the hands, it's my small POS bouldering wall, COC grippers, and this open-hand grip thing I welded together. Too much weight training used to bring my weight up quickly, plus my gym is dominated by early-20s, orange-skinned douchebags that typically sweat vodka and call each other BRA.
Results. There was no significant difference between the mean climbing levels of the three weight training groups. See Table 1.
Hmm... if you can believe the data (that's a big if), then it seems as though there is one trend: the ratio of people who either don't weight train or only do it for injury prevention, compared to the other group, gets significantly higher in the mid grades (5.11 and 5.12) and then lower again at the higher grades.
Or, put another way, the weakest and strongest climbers are most likely to be power training, while the intermediate (5.11 and 5.12) climbers are less likely to do so.
You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training.
Jay
Rather than just dismissing out-of-hand an apparent relationship, would you care to explain why this relationship doesn't count?
Ignoring the 5.9 respondents (since there were only two) here's the relationship between lifting for power and climbing grade:
Code
Grade   No   Yes   Ratio of N/Y
-------------------------------
 10     18    9        2.0
 11     21    8        2.6
 12     20    5        4.0
 13      7    5        1.4
That certainly looks like a curve. I'd be happy to learn why you consider it "no evidence for any relationship".
On a side note, you seem to like to discourage both thoughtful and thoughtless responses - you just discourage the thoughtless ones with more glee. But you paint all as worthless. This seems counterproductive at best, and a little rude at worst.
The last few comments from Costa, Kriso9tails, and crack lover all seem to me to have a common theme, this being the importance of individual perception.
I submit that if crack lover's comment had been limited to Salt Lake City after 1997, his statement would have read:
"Those sending 5.11 - 5.14 have discovered that to send those grades requires no more power than they get from their climbing or other training."
On the other hand if the comment had referred to the Gunks in 1955 he might have written that "5.7 is the limit of human ability and to attempt anything more difficult is physically unlikely and more importantly far too dangerous to merit further consideration regardless of the strength of the climber."
It's been my experience that the perceived necessity or utility of weight lifting depends upon the climbing subculture in the region where one lives. Two factors strike me as most important: first, the degree to which climbers see climbing success as an expression of upper body strength; and second, the grade at which climbers believe that climbing gets "hard".
If one wants to discuss weight lifting in the context of climbing, it's most likely not a good idea to do so in terms of performance level.
With all due respect, I have no idea how you got that from my post. My post was an attempt to explain the data. Did you see the data in the poll? I'm not just making up numbers. Of course you would get different conclusions if the data was different. Unless you're biased from the outset, that's how this stuff works!
Granted, the dataset itself is probably completely wrong, since there's nothing to keep it from being bombed by people claiming to send 5.13 who actually don't. So my conclusions are only as good as the data-set (as I stated). It's just a thought experiment.
Results. There was no significant difference between the mean climbing levels of the three weight training groups. See Table 1.
Hmm... if you can believe the data (that's a big if), then it seems as though there is one trend: the ratio of people who either don't weight train or only do it for injury prevention, compared to the other group, gets significantly higher in the mid grades (5.11 and 5.12) and then lower again at the higher grades.
Or, put another way, the weakest and strongest climbers are most likely to be power training, while the intermediate (5.11 and 5.12) climbers are less likely to do so.
You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training.
Jay
Rather than just dismissing out-of-hand an apparent relationship, would you care to explain why this relationship doesn't count?
The relationship you think you see is not even remotely statistically significant. Like I said, the dataset is too small to support the relationship you think you see.
In reply to:
Ignoring the 5.9 respondents (since there were only two) here's the relationship between lifting for power and climbing grade:
Code
Grade   No   Yes   Ratio of N/Y
-------------------------------
 10     18    9        2.0
 11     21    8        2.6
 12     20    5        4.0
 13      7    5        1.4
That certainly looks like a curve. I'd be happy to learn why you consider it "no evidence for any relationship".
The human mind sees patterns where only randomness exists. That's one reason we have statistics. The p-value for there being any relationship between climbing grade and lifting habits in the data is 0.51.* In a rough sense, if we repeated the survey with another group of climbers, there is a 50–50 chance that the curve you think you see wouldn't be there, or would even be inverted. You are looking at a random ink blot and seeing a butterfly.
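For anyone who wants to check a number like this themselves, here is a sketch using the counts from the table upthread. Note that this is a plain chi-squared test of independence, not the Cochran-Mantel-Haenszel test Jay used, so the p-value won't match his exactly, but it lands in the same ballpark (assuming SciPy is available):

```python
from scipy.stats import chi2_contingency

# Counts typed from the table above: rows are grades 5.10-5.13,
# columns are (doesn't lift for power, lifts for power)
table = [[18, 9],
         [21, 8],
         [20, 5],
         [ 7, 5]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.2f}")
```

The p-value comes out around 0.5, nowhere near the conventional 0.05 threshold, which is the formal version of "you are looking at a random ink blot."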
In reply to:
On a side note, you seem to like to discourage both thoughtful and thoughtless responses - you just discourage the thoughtless ones with more glee. But you paint all as worthless. This seems counterproductive at best, and a little rude at worst.
This is the entirety of my comment: "You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training."
That is a simple statement of fact. There is nothing rude about it. On the contrary, I'm saving you from wasting your time. Keep in mind that there is essentially a 50–50 chance that the data could have come up some other way, and that you'd now be racking your brain to explain that relationship.
Jay
*Using the Cochran-Mantel-Haenszel chi-squared test.
(This post was edited by jt512 on Nov 17, 2009, 5:26 PM)
You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training.
Jay
Rather than just dismissing out-of-hand an apparent relationship, would you care to explain why this relationship doesn't count?
The relationship you think you see is not even remotely statistically significant. Like I said, the dataset is too small to support the relationship you think you see.
If I follow you correctly, you're saying that the granularity of the data is so large that the effect I'm seeing is too small to be confidently distinguished from random fluctuation. In other words, it may be real, or it may not be. I'm not sure how you determine how much randomness a given sample will exhibit. But anyway, not having the statistical background, I'll just take your word for it.
I'm not sure how you determine how much randomness a given sample will exhibit.
One word: assumption.
Nothing in statistics works if you can't make an assumption (valid or not) about the underlying data distribution. As a frequentist, you can state that the data are biased toward lifting and climbing harder, and you'd be absolutely correct. However, for a statistician's claim that the bias is insignificant to hold water, his/her assumptions must also be valid, which usually cannot be proven with absolute certainty.
You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training.
Jay
Rather than just dismissing out-of-hand an apparent relationship, would you care to explain why this relationship doesn't count?
The relationship you think you see is not even remotely statistically significant. Like I said, the dataset is too small to support the relationship you think you see.
If I follow you correctly, you're saying that the granularity of the data is so large that the effect I'm seeing is too small to be certainly a result of anything more than random fluctuations. In other words, it may be real, or it may not be. I'm not sure how you determine how much randomness a given sample will exhibit. But anyway, not having the statistical background, I'll just take your word for it.
GO
I'm not sure that I would think about it in terms of "granularity." It's that the dataset is a sample from a population, and repeated samples will vary randomly. If you flip a coin ten times, you might end up with 6 heads and 4 tails; if you repeat the experiment, you might end up with 4 heads and 6 tails. Similarly, the present dataset is a sample. Let's focus on the 5.13 climbers. In that dataset, the ratio of strength lifters to others is 1.4. Comparing that ratio to the ratios among the other climbing grades, you hypothesize that there is a U-shaped relationship between climbing level and participation in strength training. But, in fact, the ratio of 1.4 is not statistically significantly different from a ratio of 4.0 (based on the sample size of 12 5.13 climbers). That is, if the true ratio is 4.0 (which would lead you to a completely different conclusion), the probability is still pretty high that we could have observed a ratio of 1.4 among a random sample of just 12 climbers. Due to the small sample size the "stability" of the observed ratio is poor. With larger samples stability improves, in accordance with the Law of Large Numbers. If we observed a ratio of 1.4 among 100 climbers, we could virtually certainly rule out that the true ratio was anywhere near 4.0.
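Jay's point about the instability of the 5.13 ratio can be checked with a quick simulation (assuming NumPy is available; the 5-of-12 split comes from the table earlier in the thread). If the true No:Yes ratio were really 4.0, each sampled climber would lift for power with probability 0.2; we then ask how often a sample of just 12 climbers shows 5 or more lifters, i.e. an observed ratio of 1.4 or lower:

```python
import numpy as np

rng = np.random.default_rng(42)
# If the true No:Yes ratio is 4.0, each climber is a "yes" with probability 0.2
yes_counts = rng.binomial(n=12, p=0.2, size=100_000)  # lifters per sample of 12
# A sample with 5+ lifters out of 12 gives an observed ratio of 1.4 or lower
frac = np.mean(yes_counts >= 5)
print(f"Fraction of samples with ratio <= 1.4: {frac:.3f}")
```

Roughly 7% of such samples would look at least as extreme as the observed one, well above the usual 5% cutoff, which is why the observed 1.4 can't rule out a true ratio of 4.0.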
Doesn't all this merely take a look at the training habits and attitudes of a handful of individuals at different levels of ability (and who happen to also post to this forum)?
To me the problem with all this is that it doesn't take a look at the individual and work forward from there.
For example, one can say, well, that person only climbs 5.9, so weight training is of no benefit, but perhaps without the weight training they would be climbing only 5.7, or conversely, perhaps they would climb 5.11 if they would simply dedicate all the training time to climbing only activities.
Without a controlled study, measuring pre and post performance levels, and applying two comparative training protocols, with a substantial test population, how do you arrive at any sort of meaningful conclusions?
I'm not sure how you determine how much randomness a given sample will exhibit.
One word: assumption.
Nothing in statistics works if you can't make an assumption (valid or not) about the underlying data distribution. As a frequentist, you can state that the data are biased toward lifting and climbing harder, and you'd be absolutely correct. However, for a statistician's claim that the bias is insignificant to hold water, his/her assumptions must also be valid, which usually cannot be proven with absolute certainty.
You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training.
Jay
Rather than just dismissing out-of-hand an apparent relationship, would you care to explain why this relationship doesn't count?
The relationship you think you see is not even remotely statistically significant. Like I said, the dataset is too small to support the relationship you think you see.
If I follow you correctly, you're saying that the granularity of the data is so large that the effect I'm seeing is too small to be certainly a result of anything more than random fluctuations. In other words, it may be real, or it may not be. I'm not sure how you determine how much randomness a given sample will exhibit. But anyway, not having the statistical background, I'll just take your word for it.
GO
I'm not sure that I would think about it in terms of "granularity." It's that the dataset is a sample from a population, and repeated samples will vary randomly. If you flip a coin ten times, you might end up with 6 heads and 4 tails; if you repeat the experiment, you might end up with 4 heads and 6 tails. Similarly, the present dataset is a sample. Let's focus on the 5.13 climbers. In that dataset, the ratio of strength lifters to others is 1.4. Comparing that ratio to the ratios among the other climbing grades, you hypothesize that there is a U-shaped relationship between climbing level and participation in strength training. But, in fact, the ratio of 1.4 is not statistically significantly different from a ratio of 4.0 (based on the sample size of 12 5.13 climbers). That is, if the true ratio is 4.0 (which would lead you to a completely different conclusion), the probability is still pretty high that we could have observed a ratio of 1.4 among a random sample of just 12 climbers. Due to the small sample size the "stability" of the observed ratio is poor. With larger samples stability improves, in accordance with the Law of Large Numbers. If we observed a ratio of 1.4 among 100 climbers, we could virtually certainly rule out that the true ratio was anywhere near 4.0.
Jay
The "flipping the coin" comparison doesn't work here. You can't compare natural randomness with non-random, non-blind internet survey data that is open to all sorts of validity issues. As I noted at the outset, statistically analyzing the data here is only useful for practicing statistics; it says nothing about climbers (the general population) and says nothing about the sample itself unless you're willing to accept all sorts of flawed assumptions about who was responding, how, and why. Garbage in, garbage out.
You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training.
Jay
Rather than just dismissing out-of-hand an apparent relationship, would you care to explain why this relationship doesn't count?
The relationship you think you see is not even remotely statistically significant. Like I said, the dataset is too small to support the relationship you think you see.
If I follow you correctly, you're saying that the granularity of the data is so large that the effect I'm seeing is too small to be certainly a result of anything more than random fluctuations. In other words, it may be real, or it may not be. I'm not sure how you determine how much randomness a given sample will exhibit. But anyway, not having the statistical background, I'll just take your word for it.
GO
I'm not sure that I would think about it in terms of "granularity." It's that the dataset is a sample from a population, and repeated samples will vary randomly. If you flip a coin ten times, you might end up with 6 heads and 4 tails; if you repeat the experiment, you might end up with 4 heads and 6 tails. Similarly, the present dataset is a sample. Let's focus on the 5.13 climbers. In that dataset, the ratio of strength lifters to others is 1.4. Comparing that ratio to the ratios among the other climbing grades, you hypothesize that there is a U-shaped relationship between climbing level and participation in strength training. But, in fact, the ratio of 1.4 is not statistically significantly different from a ratio of 4.0 (based on the sample size of 12 5.13 climbers). That is, if the true ratio is 4.0 (which would lead you to a completely different conclusion), the probability is still pretty high that we could have observed a ratio of 1.4 among a random sample of just 12 climbers. Due to the small sample size the "stability" of the observed ratio is poor. With larger samples stability improves, in accordance with the Law of Large Numbers. If we observed a ratio of 1.4 among 100 climbers, we could virtually certainly rule out that the true ratio was anywhere near 4.0.
Jay
The "flipping the coin" comparison doesn't work here. You can't compare natural randomness with non-random, non-blind, internet survey data that is open to all sorts of validity issues.
Validity issues notwithstanding, the data are still a sample and thus subject to random variation.
Another non-lifter pulling at 5.11+ routes / V6 boulders.
I would just add that pumping iron is not going to make me a better climber. Sure I do pullups, core exercises, squats, and other exercises, but I have no desire to go do some bench presses or curls.
I see these weightlifters come into the gym all the time. They take off their shirts to show off some muscle, then struggle with the 5.10s. They can't edge on a dime, smear, or crimp; they have no balance, no finger strength... all they can do is jug haul.
If one wants to climb hard, climb all the time. If one wants to look muscular, go lift weights.
You are over-interpreting a small dataset. There is no evidence in this dataset for any relationship between climbing level and type of weight training.
Jay
Rather than just dismissing out-of-hand an apparent relationship, would you care to explain why this relationship doesn't count?
The relationship you think you see is not even remotely statistically significant. Like I said, the dataset is too small to support the relationship you think you see.
If I follow you correctly, you're saying that the granularity of the data is so large that the effect I'm seeing is too small to be certainly a result of anything more than random fluctuations. In other words, it may be real, or it may not be. I'm not sure how you determine how much randomness a given sample will exhibit. But anyway, not having the statistical background, I'll just take your word for it.
GO
I'm not sure that I would think about it in terms of "granularity." It's that the dataset is a sample from a population, and repeated samples will vary randomly. If you flip a coin ten times, you might end up with 6 heads and 4 tails; if you repeat the experiment, you might end up with 4 heads and 6 tails. Similarly, the present dataset is a sample. Let's focus on the 5.13 climbers. In that dataset, the ratio of strength lifters to others is 1.4. Comparing that ratio to the ratios among the other climbing grades, you hypothesize that there is a U-shaped relationship between climbing level and participation in strength training. But, in fact, the ratio of 1.4 is not statistically significantly different from a ratio of 4.0 (based on the sample size of 12 5.13 climbers). That is, if the true ratio is 4.0 (which would lead you to a completely different conclusion), the probability is still pretty high that we could have observed a ratio of 1.4 among a random sample of just 12 climbers. Due to the small sample size the "stability" of the observed ratio is poor. With larger samples stability improves, in accordance with the Law of Large Numbers. If we observed a ratio of 1.4 among 100 climbers, we could virtually certainly rule out that the true ratio was anywhere near 4.0.
Jay
The "flipping the coin" comparison doesn't work here. You can't compare natural randomness with non-random, non-blind, internet survey data that is open to all sorts of validity issues.
Validity issues notwithstanding, the data are still a sample and thus subject to random variation.
Jay
Sorry Jay, you know I agree with you about a lot of things. I also respect the work behind your statistics here. But any social psychologist would tell you that biased data (e.g., data gathered the way it's gathered here) does not have random variation. That's precisely the problem: the variation is biased by a slew of factors, the primary one being a faulty instrument. You can't measure anything accurately if the instrument isn't valid.
You can't measure anything accurately if the instrument isn't valid.
That's why he said "validity issues notwithstanding."
Jay, thanks for the explanation. Let me see if I understand now:
1 - If your data shows significant variation between individuals in a population, then a sampling of that population will by definition include a large degree of random variation between individuals.
2 - In a single small survey, the only way to distinguish between a true trend and this natural random variation is if it is a very very large trend.
Is there a simple formula to compare, say, the ratio of the divergence to the size of the sample, to determine how large a divergence you must see in order for it to be considered outside of natural variation?
You know - not something that's always exactly right, but if there's a rule of thumb that could be applied, that would be very handy for someone without a statistical background. I imagine there isn't - it's probably more complicated than that. But if there was it would be nice to know.
[B]iased data (eg, data gathered the way it's gathered here) does not have random variation.
That statement is patently false. Bias and random variation are separate phenomena, and both exist in every measurement in practice. Certainly neither negates the existence of the other.
Say X is a random variable. And let's say (arbitrarily) that it has a normal distribution, with mean µ and variance s². That is, X ~ N(µ, s²). Let's say that X is measured with bias: an additive bias, a, and a multiplicative bias, b, so that what we actually observe is Y, where Y = a + bX. Then, our observed value Y will still be normally distributed, but with mean a + bµ and variance b²s². The presence of a and b did not remove any random variation from our measurement. Indeed a had no effect on the random variation, while b actually magnified it.
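Jay's algebra here is easy to verify numerically. A minimal sketch with arbitrary values chosen for µ, s, a, and b (assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, s = 10.0, 2.0   # X ~ N(mu, s^2)
a, b = 3.0, 5.0     # additive and multiplicative bias

x = rng.normal(mu, s, size=1_000_000)
y = a + b * x       # biased measurement Y = a + bX

# Theory predicts: mean(Y) = a + b*mu = 53, var(Y) = b^2 * s^2 = 100
print(y.mean(), y.var())
```

The simulated mean and variance come out very close to the theoretical values: the bias shifts and rescales the measurement, but the random variation is still there, just as claimed.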
In reply to:
You can't measure anything accurately if the instrument isn't valid.
I agree, and neither Gabe nor I are under any illusion that these data have a great deal of external validity. But that does not negate the fact that you can apply statistics to them. Statistics deals with the random variation, which is still there, as illustrated in the above example. That's also why my earlier analogy with the coin toss, which you criticized, is valid. After all, even a biased coin would still have random variation.
Jay
(This post was edited by jt512 on Nov 18, 2009, 5:15 PM)
... and neither Gabe nor I are under any illusion that these data have a great deal of external validity.
Ooh, I like that phrase!
Next time someone comes back after getting spanked on a route I recommended, and tells me I'm a douchebag sandbagger, I'll tell them I'm not talking out my ass. I'll say "What I told you was perfectly correct within my data set, it simply didn't exhibit a great deal of external validity!"
[B]iased data (eg, data gathered the way it's gathered here) does not have random variation.
That statement is patently false. Bias and random variation are separate phenomena, and both exist in every measurement in practice. Certainly neither negates the existence of the other.
Say X is a random variable. And let's say (arbitrarily) that it has a normal distribution, with mean µ and variance s². That is, X ~ N(µ, s²). Let's say that X is measured with bias: an additive bias, a, and a multiplicative bias, b, so that what we actually observe is Y, where Y = a + bX. Then, our observed value Y will still be normally distributed, but with mean a + bµ and variance b²s². The presence of a and b did not remove any random variation from our measurement. Indeed a had no effect on the random variation, while b actually magnified it.
Well put. I oversimplified. My point is that the bias here overwhelms the argument about random variation, given that that argument was forwarded as a means of generalizing the data. The only generalization to be made here is that crappy instruments collect crappy data and data collected in a similarly crappy way will be comparable.
jt512 wrote:
sidepull wrote:
You can't measure anything accurately if the instrument isn't valid.
I agree, and neither Gabe nor I are under any illusion that these data have a great deal of external validity. But that does not negate the fact that you can apply statistics to them. Statistics deals with the random variation, which is still there, as illustrated in the above example. That's also why my earlier analogy with the coin toss, which you criticized, is valid. After all, even a biased coin would still have random variation.
Jay
I agree that you can apply statistics to it. Note in my response that I applauded your efforts. What I am arguing against is that people are discussing the results as if they mean something. In other words, the stats have provided a false veneer of legitimacy while ignoring all sorts of validity issues. The caveat "validity issues aside" simply means that the data and any conclusions from them should be ignored. Just because you can teach a kid to add two rotten apples doesn't mean eating them will taste any better.
Is there a simple formula to compare, say, the ratio of the divergence to the size of the sample, to determine how large a divergence you must see in order for it to be considered outside of natural variation?
You know - not something that's always exactly right, but if there's a rule of thumb that could be applied, that would be very handy for someone without a statistical background. I imagine there isn't - it's probably more complicated than that. But if there was it would be nice to know.
I think one seat of the pants validity check is to say "if one or two of my data points changed, would that change my conclusion."
E.g. If I'm counting how many cars on the road have one headlight out and I look at ten cars, then I have to be careful about saying "10%." After all, one or two cars could change that to somewhere between 0%-30%.
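The headlight example can be made precise with an exact confidence interval on a proportion. A sketch, assuming SciPy is available:

```python
from scipy.stats import binomtest

# 1 of 10 observed cars had a headlight out
result = binomtest(k=1, n=10)
ci = result.proportion_ci(confidence_level=0.95)  # exact Clopper-Pearson interval
print(f"point estimate 10%, 95% CI: {ci.low:.1%} to {ci.high:.1%}")
```

The exact 95% interval for 1 success in 10 trials runs from well under 1% up to roughly 45%, which is the formal version of "be careful about saying 10% from ten cars."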
What I am arguing against is that people are discussing the results as if they mean something. In other words, the stats have provided a false veneer of legitimacy while ignoring all sorts of validity issues. The caveat "validity issues aside" simply means that the data and any conclusions from them should be ignored. Just because you can teach a kid to add two rotten apples doesn't mean eating them will taste any better.
I don't think that these data are completely useless. Certainly as quantitative estimates they are meaningless, but I think that they are qualitatively informative about the population of climbers from whom the data are drawn. For instance, I think we can say that among 5.10–5.12 climbers in this population, there probably is no strong relationship between climbing level and weight-lifting behavior.
Jay
(This post was edited by jt512 on Nov 18, 2009, 6:13 PM)
You can't measure anything accurately if the instrument isn't valid.
That's why he said "validity issues notwithstanding."
Jay, thanks for the explanation. Let me see if I understand now:
1 - If your data shows significant variation between individuals in a population, then a sampling of that population will by definition include a large degree of random variation between individuals.
I don't understand that statement, because your data is a sampling of the population. Perhaps what you are getting at is that random variation in the sample will reflect the variation in the population. That is true.
In reply to:
2 - In a single small survey, the only way to distinguish between a true trend and this natural random variation is if it is a very very large trend.
Exactly. As an example, let's say we take a sample of n 5.10 climbers and n 5.12 climbers, and we calculate p1, the proportion of the 5.10 climbers who lift weights, and p2, the proportion of the 5.12 climbers who lift weights. We want to know whether the difference between p1 and p2 is statistically significant. Using a common statistical criterion, we would say that p1 and p2 are significantly different if the following inequality is true:

| p1 − p2 | > 1.96 × sqrt[ p1(1 − p1)/n + p2(1 − p2)/n ]

So if | p1 − p2 | is small, n needs to be larger for statistical significance than if | p1 − p2 | is large.
In reply to:
Is there a simple formula to compare, say, the ratio of the divergence to the size of the sample, to determine how large a divergence you must see in order for it to be considered outside of natural variation?
GO, the above formula works for pairs of proportions, when the sample size is n for each of the two groups being compared. If the sample sizes are different, you can use

| p1 − p2 | > 1.96 × sqrt[ p1(1 − p1)/n1 + p2(1 − p2)/n2 ]
If you intend to actually use these formulas, you should probably be aware that "significant" here means "significant with 95% confidence," which means that if p1 and p2 are really the same in the underlying population, and you take a sample from that population, you will have a 5% chance of erroneously concluding that p1 and p2 are different. That's where the 1.96 comes in. Larger values, i.e., stricter criteria for significance, reduce the chance of erroneously concluding that the difference is real.
You should also be aware that the above formulas are not valid if p1 or p2 are too close to 0 or 1, or n1 and n2 are too small. As a rule of thumb, the p's should be between 0.1 and 0.9, and the n's should be at least 10. Many statisticians would recommend even greater n's.
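For anyone who wants to apply the criterion Jay describes (|p1 − p2| compared against 1.96 standard errors), here is a small helper. The example numbers are made up for illustration, not taken from the survey:

```python
import math

def proportions_differ(p1, n1, p2, n2, z=1.96):
    """Rule-of-thumb two-proportion comparison at ~95% confidence."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) > z * se

# Small samples: a 9-point gap is nowhere near significant
print(proportions_differ(0.33, 27, 0.42, 25))    # False
# Larger samples: a 30-point gap clears the bar easily
print(proportions_differ(0.30, 200, 0.60, 200))  # True
```

Note the caveats above still apply: this shortcut misbehaves when the p's are near 0 or 1 or the n's are small.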
Another non-lifter pulling at 5.11+ routes / V6 boulders.
...fair enough,
i_h8_choss wrote:
Sure I do pullups, core exercises, squats, and other exercises...
...hang about, doesn't your second statement contradict your first?
No. Lifting means a gym membership with bench press, curls, lat pulldown, free weights, machines, spotters, etc.

Doing some pull-ups, core exercises, and squats is not weight lifting.
This is false.
Yeah you're right.
So next time I stay in my apartment and do 100 pull-ups, 300 sit-ups, and some squats and lunges (without weights), I'm going to call up my buddy and say, "Hey, you want to come over and lift weights?"

I can't wait to see the look on his face when he shows up and sees that I don't own any weights.