Many thanks for the pointer, I shall discuss it with our local tech discussion group here tomorrow morning. One of the guys, the PhD candidate in therapy audio analysis, will be most interested as well. Our project is indeed becoming ever more interesting, like an onion, peeling away layers of complexity.

The key difference with our DSP methodology is that the spectrum has few if any harmonic cycles to identify, being of a low deviation level within a thin range of dB across the whole spectrum. Thus each frequency is much like the last, giving no pattern to cross-correlate. At this stage we do not think machine learning will give us much either, without some way of getting a pattern to match somewhere. One suggestion has been to take a series of very thin frequency slices at the times we know the squeak occurs, using Audacity as our source range, but as I say, the y-axis dB for every frequency deviates very little, hence it is just "squelch".
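Incidentally, one way to put a number on that "each frequency is much like the last" observation is spectral flatness (geometric mean over arithmetic mean of the power spectrum): near 1.0 for a featureless, noise-like spectrum and near 0.0 when tonal peaks exist. This is just a minimal NumPy sketch, not our actual pipeline, but it illustrates the measure:

```python
import numpy as np

def spectral_flatness(signal, n_fft=4096):
    """Spectral flatness = geometric mean / arithmetic mean of the
    power spectrum.  Near 1.0: noise-like, no pattern to
    cross-correlate (the "squelch" case).  Near 0.0: strong tonal
    or harmonic peaks stand out from the floor."""
    window = np.hanning(n_fft)
    power = np.abs(np.fft.rfft(signal[:n_fft] * window)) ** 2
    power = power + 1e-12                     # avoid log(0)
    geo_mean = np.exp(np.mean(np.log(power)))
    return geo_mean / np.mean(power)

# White noise: flatness well above zero (featureless spectrum)
rng = np.random.default_rng(0)
noise = rng.standard_normal(8192)
print(spectral_flatness(noise))

# Pure 1 kHz tone at 44.1 kHz: flatness close to zero (clear peak)
t = np.arange(8192) / 44100.0
tone = np.sin(2 * np.pi * 1000.0 * t)
print(spectral_flatness(tone))
```

Running the same measure on the thin slices around the squeak would at least confirm whether anything tonal is hiding in them, or whether the slices really are as flat as the full spectrum.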
I hope to gain something from the research you mention, many thanks.
All the best,