First, my apologies to you both for the delayed response (I'm currently working two separate efforts on different networks).
I think I just failed to emphasize the filtering part of my basic question. I've revised the data set in the initial problem to better represent the kind of data I'll be getting, but to restate: for any given data set (across many data sets), I know that most of the data should fall into a single range, though what that range is will differ between data sets. Because of measurement tolerances, some spurious data will have slipped past my first filter and land in other ranges. My idea for a second filter is therefore to lean on histcounts for a low-computational-cost estimate of the data groups, and to pick the highest-count group (where each group is defined as a contiguous set of histogram bins) as the valid data, discarding the rest as junk.
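For reference, here's a rough sketch of what I mean by that second filter, written in Python/NumPy (np.histogram playing the role of histcounts); the function name, bin count, and structure are just illustrative, not the final implementation:

```python
import numpy as np

def keep_dominant_group(data, nbins=20):
    """Second-stage filter sketch: histogram the data, find the
    contiguous run of occupied bins with the largest total count,
    and keep only the samples that fall inside that range."""
    counts, edges = np.histogram(data, bins=nbins)
    occupied = counts > 0

    # Scan for contiguous runs of occupied bins, tracking the run
    # with the largest total count.
    best_total, best_span = -1, (0, 0)
    i = 0
    while i < len(counts):
        if occupied[i]:
            j = i
            while j + 1 < len(counts) and occupied[j + 1]:
                j += 1
            total = counts[i:j + 1].sum()
            if total > best_total:
                best_total, best_span = total, (i, j)
            i = j + 1
        else:
            i += 1

    # Keep only the data inside the winning run's bin edges.
    lo, hi = edges[best_span[0]], edges[best_span[1] + 1]
    return data[(data >= lo) & (data <= hi)]
```

So with something like `[9.8, 10.1, 10.0, 9.9, 10.2, 50.0]`, the cluster near 10 wins and the stray 50.0 is dropped. The bin count would need tuning so that genuinely separate groups don't merge into one run of occupied bins.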
I'm currently trying out each of your solutions to see which gets me closer to what I'm actually going for.