Flagging an update (coming) to Big News Part III
Score 1 for open science review, thanks to Bernie Hutchins, an electrical engineer who diligently asked the right questions about something that bothered him regarding the notching effect. We’re grateful. This will improve the model. On the downside, it means we’re slightly less certain of the delay (darn) — the notch doesn’t guarantee a delay as we had previously thought. But there is independent evidence suggesting temperatures on Earth follow solar activity with a one-cycle delay — the lag seen in studies like Archibald, Friis-Christensen and Usoskin is still a lag.
What does it mean? The step-response graph (figure 2 in Part III or figure 4 in Part IV) will change, and needs to be redone. The reason for assuming there is a delay, and building it into the model, rests now on the independent studies, and not on the notch. The new step change will need to be built into the model, and in a few weeks we’ll know how the predictions or hindcasting change. David feels reasonably sure it won’t make much difference to the broad picture, because a step-response something like figure 4, Part IV, explains global warming fairly well and will presumably arise again with the new method. But obviously it needs to be done to be certain.
The irony is that it was the FFT (Fast Fourier Transform) that produced what appeared to be a “non-causal” notch (and if it were non-causal, it would necessarily mean a delay). If David had used his slower OFT (Optimal Fourier Transform) the mistake might not have arisen, because, unlike the DFT and FFT, the OFT uses frequencies whose periods do not divide exactly into the length of the time series. In one of those quixotic paths to understanding, the incorrect clue of a solar delay of one cycle “fitted” with the other evidence, and possibly David wouldn’t have seen the pattern if he’d used the OFT.
The previous post Big News Part III needs correcting (which is coming), and Bernie Hutchins needs a big thank you. Without his time and effort in comments, David would not have spotted the problem in the code. And it’s so much the better to know it sooner rather than later. — Jo
David Evans replies to Bernie Hutchins
That graph of the phase of your transfer function matches mine, so that pretty much seals it at your end. You appear to be analyzing the correct notch filter.
Your remark that it “might have meant the DFT and not the FT” was an important clue. I had used DFT/FFTs to get the spectrum of the step function. The DFT implicitly assumes a time series repeats to infinity — because the DFT uses only frequencies whose periods exactly divide into the length of the time series. The response I was calculating was therefore that of a series of rectangular pulses, even though I was calculating the response of a time series that was all zeros in the first half and all ones in the second half (so it looked like a step function). Checking the spectrum of what I thought was the step function, I now see that it has only the first, third, fifth etc. harmonics — just like the Fourier series of a train of rectangular pulses. The amplitudes have the 1/f dependence of a step function’s spectrum, but the even-numbered harmonics are missing — and they are not missing in the spectrum of a true step function.
So that finally resolves the discrepancy. My response was to a train of rectangular pulses and was incorrect. Yours was to a step function, and presumably is correct (it makes sense intuitively, but I haven’t checked it numerically yet).
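To see the effect concretely, here is a minimal numpy sketch (illustrative only, not the code used in the model): take a series that is all zeros in the first half and all ones in the second, take its DFT, and the even-numbered harmonics vanish, which is the signature of a pulse train rather than a step.

```python
import numpy as np

N = 8000                                  # same length as the "step function" series
x = np.concatenate([np.zeros(N // 2), np.ones(N // 2)])  # looks like a step

# The DFT implicitly treats x as periodic, i.e. as a train of rectangular pulses.
amps = np.abs(np.fft.rfft(x)) / N
print(amps[1:8].round(5))
# Odd harmonics fall off as 1/(pi*k); even harmonics come out numerically zero.
# A true step function has all harmonics present.
```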
Bernie, thanks for helping me find the bug, and thank you for your persistence. Well done! I am in your debt. I asked many people to check it, but you are the only one to find the problem. It is good to get this sorted out. (One other person calculated a response in Mathematica but the response seemed to come from infinity near t=0 as I recall, so something was wrong there.)
Everything else about the method worked, so my usual checks didn’t find any problems. The low-pass filter and delay work just as well on a train of pulses as on a step function, so they appeared OK. Just changing the length of the time series usually exposes problems like this, because extending the time series should not make any difference, so if it does there is a problem. In this case I had pushed it out from its usual 200 years to 20,000 years, and it made no difference. Changing the rate at which the function was sampled also made no difference. So it all seemed to check out numerically. It turns out the spectrum of a train of pulses is fundamentally different from that of a step function, so I was just consistently getting the wrong answer.
Odd that I didn’t spot it earlier. I am usually very aware of that particular phenomenon: I mention it specifically in the main paper; I took care, when using the model to simulate temperature, to use step responses and NOT work in the frequency domain, so as to avoid any possibility of such a problem; and a major advantage of the OFT is that it avoids this type of problem, because it uses frequencies other than those whose periods divide exactly into the length of the time series. I let my guard down here because I used an FFT for speed (the “step function” time series is 8,000 points long).
There may have been some confirmation bias at work. In the development I had already realized early on that there seemed to be a delay just from fitting TSI-driven models with notches to temperatures. So when I computed the “step response” of any simple notch filter and “found” it was non-causal, that seemed like the answer. Simple. Computing the spectrum of a general notch is difficult, and the proof was left on a long “to-do” list.
So what does this mean for the notch-delay model? Possibly not much, but there will be a delay (ahem) while I recompute things and update the model and graphs.
The causality of a notch doesn’t support the mandatory nature of the delay that I thought I had established, but there is plenty of other evidence to suggest a delay is needed. There are half a dozen independent findings of a delay of around 11 years in the literature, and a solar influence fits better with a delay (e.g. Lockwood and Froehlich 2007 — they show that solar influence doesn’t fit without a delay). Either the sun has little effect beyond the small TSI variations, and we’re left with CO2-driven climate models that don’t work and a mystery about all the warming before 1900, or there is some solar effect that appears to be delayed from TSI changes. While the notch does not *necessarily* mean a delay, certainly the *possibility* of a delay is strong. So there is sufficient reason to include a delay in the solar model — and the model remains as before, with no change. (Clarification: the form stays the same, the parameters will vary.)
What has changed is the link between the model’s transfer function (fitted to the empirical transfer function) and the model’s step response (used to compute temperatures). That link is broken, and I’ll have to find another way to compute the step response from the model’s transfer function, then rerun all the optimization and so on. It might take a few weeks before it’s all fully sorted out.
(By the way, finding the step response numerically from a transfer function is difficult. The DFT/FFT turns out to be unhelpful, because it uses only frequencies whose periods exactly divide into the time series length. The correct method might involve numerical integration, to imitate the Fourier inversion integral directly. These MIT course notes discuss ways of doing it by solving differential equations, and note that the general-purpose method in MATLAB sometimes fails (page 23, problem 3), implying there is no easy method that always works. Note that we need to find the step response of not just a notch filter, but a notch combined with a low pass filter and delay in a particular configuration, for which an analytic solution is unlikely — though I’ll have a go.)
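As one sketch of that numerical-integration route (hypothetical filter parameters throughout; this is not the model’s fitted transfer function), the Fourier inversion integral for a step input can be approximated on a dense frequency grid:

```python
import numpy as np

f0, Q = 1 / 11.0, 2.0   # hypothetical notch centre (cycles/year) and sharpness
fc = 1 / 5.0            # hypothetical low-pass corner frequency
tau = 11.0              # hypothetical delay, years

def H(f):
    """Hypothetical transfer function: notch x low-pass x delay."""
    s = 2j * np.pi * f
    w0 = 2 * np.pi * f0
    notch = (s**2 + w0**2) / (s**2 + w0 * s / Q + w0**2)
    return notch * np.exp(-s * tau) / (1.0 + 1j * f / fc)

# Step response via the inversion integral
#   s(t) = H(0)/2 + (1/pi) * Integral_0^inf Im[H(f) e^(i 2 pi f t)] / f df,
# truncated at fmax and evaluated with the trapezoid rule (a sketch, ringing and all).
f = np.linspace(1e-6, 50.0, 500_001)
df = f[1] - f[0]
Hf = H(f)

def step_response(t):
    g = (Hf * np.exp(2j * np.pi * f * t)).imag / f
    return H(0).real / 2 + (g.sum() - 0.5 * (g[0] + g[-1])) * df / np.pi

for t in (1.0, 5.0, 11.0, 22.0, 50.0):
    print(f"t = {t:5.1f} yr   s(t) ~= {step_response(t):+.3f}")
```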
Fitting the solar model to the observed temperatures and the empirical transfer function will presumably produce broadly the same results, and again find that a step response vaguely like the one found previously fits the observed temperatures best. So I expect the theory still broadly holds.
In particular, an eleven-year smoother with an eleven-year delay will likely still be a crude approximation to the upcoming reparameterized model, so the marked fall in the solar radiation trend somewhere around 2004 still likely points to a significant temperature fall starting around 2015–2017. But until the re-optimization is finished there is no point in speculating further.
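For anyone who wants to play along, that crude approximation can be sketched in a few lines (illustrative only; the re-fitted parameters may well differ):

```python
import numpy as np

def crude_notch_delay(tsi, dt=1.0, smooth_years=11.0, delay_years=11.0):
    """Crude stand-in for the model: an 11-year moving average, delayed 11 years."""
    k = max(1, int(round(smooth_years / dt)))   # smoothing window in samples
    d = int(round(delay_years / dt))            # delay in samples
    smoothed = np.convolve(tsi, np.ones(k) / k, mode="same")
    out = np.full_like(smoothed, np.nan)        # first delay_years are undefined
    out[d:] = smoothed[:-d]
    return out
```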
This is a triumph of open science, in my opinion. Many eyeballs in this case found a problem that review by several peers did not (though because it wasn’t an official peer review, expectations and standards would have been different).
On the whole, aside from the obvious benefit of now being closer to the truth, this is a good development for the notch-delay solar theory. I think people will find the hypothesis of a delay easier to accept from disparate observations and a good fit, rather than an unfamiliar mathematical argument (more of a numerical argument really, which turned out to be incorrect anyway).
Again, thank you Bernie for helping me get to the bottom of this. — David
REFERENCES:
Archibald, David (2010), “The Past and Future of Climate”, http://www.davidarchibald.info/papers/Past-and-Future-of-Climate.pdf
Archibald, David (2006), “Solar Cycles 24 and 25 and Predicted Climate Response”, Energy and Environment, Vol. 17, No. 1, pp. 29–35
Friis-Christensen, E.; Lassen, K. (1991), “Length of the Solar Cycle: An Indicator of Solar Activity Closely Associated with Climate”, Science, 254, pp. 698–700
Lockwood, Mike; Froehlich, Claus (2007), “Recent oppositely directed trends in solar climate forcings and the global mean surface air temperature”, Proceedings of the Royal Society A
Moffa-Sánchez, Paola; Born, Andreas; Hall, Ian R.; Thornalley, David J. R.; Barker, Stephen (2014), “Solar forcing of North Atlantic surface temperature and salinity over the past millennium”, Nature Geoscience, Supplementary Information
Solheim, Jan-Erik; Stordahl, Kjell; Humlum, Ole (2012), “The long sunspot cycle 23 predicts a significant temperature decrease in cycle 24”, Journal of Atmospheric and Solar-Terrestrial Physics
Soon, Willie W. H. (2009), “Solar Arctic-mediated Climate Variation on Multidecadal to Centennial Timescales: Empirical Evidence, Mechanistic Explanation, and Testable Consequences”, Physical Geography, pp. 144–184
Usoskin, I. G.; Schuessler, M.; Solanki, S. K.; Mursula, K. (2004), “Solar activity over the last 1150 years: does it correlate with climate?”, Proc. 13th Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, Hamburg, pp. 19–22
Usoskin, I. G.; Schuessler, M.; Solanki, S. K.; Mursula, K. (2005), “Solar activity, cosmic rays, and the Earth’s temperature: A millennium-scale comparison”, Journal of Geophysical Research, 110, A10102
Well done indeed (Mr?) Hutchins!
And bravo to Dr. Evans for being willing to accept a robust alternative view.
It’s heartening to see some genuine peer review conducted by crowd sourcing a hypothesis. This utterly transparent procedure must be the way forward to restore a badly battered public confidence in the scientific process. No doubt there will now be posts claiming that Mr. Hutchins is wrong – I would be disappointed if there weren’t. Both the constructive and destructive criticism of David’s work will ultimately serve the same end of testing the hypothesis with a rigour foreign to so much produced in Climate “Science.”
‘Conducted by crowd sourcing’? This method should be consigned to the dustbin where it belongs, because you’ll get any Tom, Dick or Harry of a pseudo-scientist elevating their favourite, untested, unreviewed myth to the level of genuine research!
…“you’ll get any Tom, Dick or Harry of a pseudo-scientist elevating their favourite, untested, unreviewed myth to the level of genuine research!”
It’s already happening. Lewandowsky and Oreskes, the former a writer of hysterical fiction, the latter a writer of historical faction, have attached themselves to a “scientific” paper, and Lovejoy has produced a paper claiming that natural variation has masked CO2-produced global warming since 1998. Neither paper can be considered genuine research. Both papers will be puffed by a complicit MSM, proving once more that a Lie will be half-way round the World before the Truth can get its boots on.
‘It’s already happening’, so you may wrongly say. I know not about the examples you have produced: just two examples out of the thousands which have been reviewed over the years. You have a very long way to go before you are even on level terms with the overwhelming evidence! In cricket terms, you’re not out of the pavilion yet; in fact you are not even padded up!
Is that your best effort? I was expecting at the very least another of your YouTube videos.
BA4: Here’s another cricket term: most of what has been published in “learned journals” in the last 20 years is a load of balls.
Real peer-review science in climate seems to have been suspended sometime around the early 1990s.
If it were not for that, there would be no need for an alternative system.
Great reply, Kevin.
BA4 completely stumped!
I think it is actually a vindication of the crowd-source method. Tell me, how many peer reviews actually involve reviewing to the depth that would find this problem? MBH98 tells me not many. Reviewers only have the time to investigate if a paper’s proposition is based on theoretical possibilities.
I think I was one of the few following that discussion, because most folks had gone on to other posts. I had thought Bernie was being obtuse — not being a math guy myself. Excellent to find out I was wrong. I like learning stuff. And this open science with integrity stuff is quite something. I’d love to see more of it, not just around here. LS take note. It is OK to be wrong. All you need to do to fix it is change your mind. The less we have of this the better:
“Truth never triumphs — its opponents just die out. Thus, Science advances one funeral at a time” Max Planck
The triumph of truth makes evolution faster.
David,
Is this correct?
I was under the impression that you create an ideal step function from only odd harmonics, in phase and decreasing in amplitude as 1/f. And of course a non-ideal step function – significant rise time – has even harmonics as well. Until you get to the sawtooth function, which has even and odd harmonics.
For engineering purposes we use frequency response = 0.35/rise-time as a rough rule of thumb for designing digital circuits where pulse fidelity is not critical (e.g. a 1 ns rise time implies roughly 350 MHz of bandwidth).
The Fourier transform of a (unit) step function is
0.5 * delta(f) - i / (2 * pi * f).
Ignore the first term — it’s just the average value of the step function (namely 0.5), the dc term (the cosine at f = 0).
The second term describes the sines: their amplitudes decrease as 1/f, but they are all there.
A train of rectangular pulses is the classic example of Fourier analysis, but it has only odd-numbered sines (no even-numbered harmonics) – see here.
Thanks!
Hmm… where did I read that plans may sometimes be amended and improved by those who were not the leaders in their creation? Probably Bertrand Russell.
Example here of the value of open society science compared to siege mentality … er… ‘science.’
BTW the time stamps here are still 11 minutes fast compared to UTC.
Many servers don’t seem to update their clocks with any regularity. I suspect they may rely on manual entry and updating of the time — not exactly state of the art when utilities to do the job on any schedule you want are so readily available and network delays are just milliseconds.
Most PCs have abysmal clocks and Windows will only update once a week. So for years I’ve used a utility that updates the time when I log on and every 24 hours after that, thereby ensuring accuracy within a second all day long. The only problem with user mode clock utilities is that you have to go into the policy settings of Windows and authorize users to set the clock or they won’t work (or you run as an administrator, which is a practical necessity for software developers).
It gets worse when Verizon, my wireless carrier, doesn’t keep the network time correct and it drifts all around.
I was left a bit puzzled when David explained his use of the Fourier transform as opposed to the Laplace transform for aspects of his model (the Laplace transform could be viewed as a more ‘universal’ way of looking at things, without the time-domain limitations of the Fourier series). I am still a bit puzzled by how the physical model might explain the phase response dictated by the notch and the delay filter. While, for example, the magnitude of any 22-year cycle is only moderately attenuated, the model shows a big phase shift for that component. The delay filter naturally exhibits rapid increases in the phase response as higher frequencies are considered; that is quite true and to be expected, and more of a mathematical curiosity, but it is still a bit hard to grasp just what sort of ‘Force X’ or physical process not only cancels out the magnitude of the 11-year cycles but also produces the significant phase shifts of, say, the lower-frequency 22-year cycles, as predicted by the model. I am also still not comfortable with David’s lower-frequency spectrum (and resulting observations) extending so far, given the short time window of data. That is a Fourier vs Laplace issue too.
If the periodicity of the sunspot activity is indeed described by a random period fluctuating between the limits described and observed, then it seems a little complex to let that random variation of frequency be modelled by a necessarily broader ‘notch’, the random nature of the input requiring an ever longer time series to be accurately modelled, or indeed a Laplace analysis. Don’t the observations suggest that the ‘delay’ is not a complex function (also requiring a complex phase response) but more simply ‘exactly’ related to an anti-phase forcing occurring at the same time as the primary measured TSI forcing? For example, suppose there were some difference in heating response to, say, the UV (as speculated), and that response were essentially an anti-phase response (an anti-phase forcing) to the primary spectrum of the measured TSI, i.e. it caused less heating or outright cooling; we might then expect these UV emissions to follow the same (somewhat) random periodicity as the primary TSI activity. So in reality Force X would not be a function of the last period of TSI but coincident with the current period. In other words, the model would describe an input forcing having two components of TSI, or similarly two distinct forcings with the same (somewhat) random periodicity, only with an anti-phase relation between the two. The current model essentially bundles these possibly simple phase and anti-phase forcings (of varying periodicity) into a quite complex ‘Force X’ function, which not only has to represent this (potential) anti-phase relation of components in the TSI but also has to have a frequency response that represents the measured ‘jiggle’ in the 11-year TSI periodicity. That randomness is never going to be accurately represented with a limited time series, and it will certainly differ between the different-length time series which David uses to arrive at the model.
So what physical mechanism (Force X) could (also) cause the complex phase response at the lower frequencies, between say the LP roll-off and the notch centre? I know it is a little bit of a theoretical ask if there are no significant frequencies in the input outside the 11-year centre, but the model does say the phase response occurs where there is essentially no magnitude effect.
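One way to make the question concrete is to tabulate magnitude and phase for a purely illustrative notch-plus-delay (hypothetical parameters; not David’s fitted ‘Force X’ response):

```python
import numpy as np

f0, Q, tau = 1 / 11.0, 2.0, 11.0   # hypothetical notch centre, sharpness, delay

def H(f):
    s = 2j * np.pi * f
    w0 = 2 * np.pi * f0
    notch = (s**2 + w0**2) / (s**2 + w0 * s / Q + w0**2)
    return notch * np.exp(-s * tau)  # notch cascaded with a pure delay

for period in (22.0, 11.0, 5.5):     # years
    h = H(1.0 / period)
    print(f"{period:4.1f} yr: |H| = {abs(h):.3f}, phase = {np.angle(h, deg=True):+7.1f} deg")
# The 22-year component is only modestly attenuated here, yet carries a large
# phase shift, mostly from the delay term. (This idealised notch nulls the
# 11-year line completely, unlike the finite notch seen in the data.)
```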
Wow…
I feel like I’m really witnessing history here.
This is science as it was meant to be.
This is more like it!
Great minds coming together to overcome a great clash of egos and ignorance.
So Michael Mann, where’s your data?
A team will always win over individuals.
Well done David and Jo.
A team still needs a leader.
David,
For a step function and your manual Fourier transform, where you pick the frequency, there is a fixed residue at each frequency for the part of the step outside your interval. I wish I could remember — I found it in the literature long ago. Such may help with the clarity and accuracy of your revision.
Dr Evans should be congratulated and this should be used as an example to the Climate Science clique and the IPCC.
It is so refreshing to see science done this way. Totally open source. No obstruction to data and coding à la M Mann, J Gergis et al (incl. D Karoly), S Lewandowsky & J Cook, Phil Jones etc. etc. No claims of lost data. No ad homs like calling those who point out flaws schoolboys who practise Voodoo science.
Just a preparedness to put the theory out there with all the supporting material and invite input. Consider the input in a rational manner and where necessary acknowledge where amendments or corrections need to be made. Thank the appropriate parties and get on with the science. Brilliant!
This is the peer review process at its finest…
Congratulations to Bernie Hutchins for his diligence and to David Evans for his honesty and integrity. The final product will be better for it.
What does Dr. Willie Soon think about TSI? Perhaps this is worth watching:
http://www.ustream.tv/recorded/49735731
It can be seen that the effect of the sun on the atmosphere is enhanced due to changes in UV and GCR over longer periods of time. UV and GCR ionize the atmosphere in different areas: UV is stronger in the equatorial zone, and GCR at the poles. A decrease in UV and a simultaneous increase in GCR change the circulation in the atmosphere (stratospheric waves).
http://cosmicrays.oulu.fi/webform/query.cgi?startday=01&startmonth=06&startyear=2000&starttime=00%3A00&endday=01&endmonth=07&endyear=2014&endtime=00%3A00&resolution=Automatic+choice&picture=on
http://www.swpc.noaa.gov/SolarCycle/sunspot.gif
It is worth seeing the south polar vortex at an altitude of 17 km.
http://earth.nullschool.net/#2014/07/25/0600Z/wind/isobaric/70hPa/orthographic=-12.99,-135.80,318
UV radiation depends on the activity of sunspots (not on their number).
http://www.spaceweatherlive.com/en/archive/2003/10/28/rsga
The delay in the temperature decrease may be due to the inertia of the ozone layer.
http://www.esrl.noaa.gov/gmd/odgi/odgi_fig3.png
I applaud the open science and open peer review process!
For those unable to understand the stance taken by The Team and the IPCC, change your viewpoint for the sake of analysis. Look at their actions as though they were trying to deceive in the first place, so that money and power could flow to their fellow conspirators. If the dots line up, then perhaps this analysis would be the correct one….
Here we see the blockade of the southern polar vortex in the region of Australia, at altitudes of 26,000 and 20,000 m.
http://www.cpc.ncep.noaa.gov/products/intraseasonal/temp10anim.gif
http://www.cpc.ncep.noaa.gov/products/intraseasonal/temp50anim.shtml
Note the waves, which induce the blocking in the stratosphere.
Wow, just wow. A scientist did exactly what he said he was going to do.
AMAZING!!!!
Bernie, thank you for your tenacity, and maturity. No slams or slander, nothing “extra” added to the discussion. This might have happened much earlier if everyone wasn’t so distracted by the ego wars.
Hopefully it can move forward in a more professional manner.
I agree. I followed the exchanges between David and Bernie. Bernie persevered in a polite and intelligent manner, always sticking to the science. He had to keep at it for quite a few comments before his points started getting traction. Many would have given up or resorted to spiteful sniping from the sidelines.
Actually, David said he was going to release the out-of-sample testing and the code.
He has done neither.
That’s inaccurate.
He has done neither [to the best of my knowledge]. Accurate.
He has done neither [as of this date]. Accurate.
He has done neither [on MY time schedule]. Quite accurate.
He has done neither [even after my persistent whinging]. Sadly accurate.
He has done neither [just to piss me off]. Speculation on my part.
No, it’s quite accurate.
Why defend his Mannian tactics?
And what will you bozos say when another flaw is found in the model construction?
How much like the IPCC does this joker have to get before you throw him under the bus?
Some 45 years ago, as part of a telecomms course, I had to analyse the output from a square-wave generator using a spectrum analyser. Anticipating all the odd harmonics, I was surprised to get a low amplitude of even harmonics extending up the spectrum. Told to just report them as ‘inharmonic partials’, I instead used Fourier analysis and established that the real cause was that the ‘square wave’ did not have an equal mark-to-space ratio. It was a rectangular wave.
Only if the mark/space ratio is 1:1 can it be termed a square wave, comprising the odd harmonics to infinity; otherwise an even-harmonic component will persist.
Too long ago now to remember the details unfortunately.
Thanks for stirring the grey cells.
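Ray’s diagnosis is easy to reproduce numerically; a quick sketch (any FFT package will do): at exactly 50% duty cycle the even harmonics vanish, and pulling the mark/space ratio slightly off brings them back.

```python
import numpy as np

N = 4096
t = np.arange(N) / N
for duty in (0.50, 0.45):                 # mark/space 1:1 versus slightly "rectangular"
    x = (t < duty).astype(float)          # one period of a rectangular wave
    harm = np.abs(np.fft.rfft(x)) / N     # harmonic magnitudes
    print(f"duty {duty:.2f}: harmonics 1..6 = {harm[1:7].round(4)}")
# duty 0.50 -> even harmonics ~ 0; duty 0.45 -> even harmonics clearly present
```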
Did you see David’s reply to me up thread? http://joannenova.com.au/2014/07/notching-up-open-review-improvements-a-correction-to-part-iii/#comment-1515498
Ray – good for you! Too often instructors openly invite students to ignore any inconvenient anomaly, especially if it is small! (Inharmonic partials indeed.) The really good students want to know why they are getting a strange result, not just that they won’t have points deducted 😉.
We had a lab experiment (not my design) where students sampled a square wave and computed the Fourier series with a Pascal program given to them. Not a great experiment, but the general idea served; they expected, and mostly got, 1/k, purely imaginary (because of odd symmetry), odd harmonics. But there was a small, constant, real part to the FS. I was asked to find the problem with the Pascal program, but I knew immediately (having seen it before) that the program assumed samples were taken at the middle of the sampling intervals, effectively offsetting the square wave by half a sample. That is, a symmetric narrow pulse train was added (small real values for all harmonics).
How inconsiderate of a program or formula to do what it is supposed to do instead of what you WANTED it to do.
Looks like I’ll have to be the one to make negative comments:
Whilst the “notch means delay” error has been removed, there is still the problem of a lack of evidence for a notch.
Splitting the frequencies of TSI variation into 4 regions isolates the issue to one of those regions:
A: Very Low Frequencies (long-term trends): nobody could object to Temp following TSI
B: 11-year oscillations: few would object to lack of signal
C: Multi-annual variations (e.g. periods around 5 years): YOU HAVE TO SHOW POSITIVE EVIDENCE OF CORRELATION
D: High frequency oscillations: nobody would object to lack of signal due to thermal inertia
Region C (key to the notch) is sandwiched between two that have no signal, and surely it is much more likely that region C also has no signal, i.e. there is no notch, just a low-pass filter.
I can’t take this seriously until evidence for a notch is presented.
Hi Mikky,
And so you did. But you forgot to say something worth reading and you forgot to support what you said with equations or arguments that can stand scrutiny…
I expect evidence for a notch to be forthcoming. And if I can wait, so can you.
And so you know — I realize that David’s whole theory could fall apart if the future does not support it. But in that case his theory will be no worse off than the one you no doubt prefer, will it? A failure is a failure.
On here I say:
I can’t take this seriously until evidence for a notch is presented.
On “consensus” propaganda sites I say:
I can’t take this seriously until there is a clear anthropogenic signal.
Scepticism has to be consistent, otherwise it too is propaganda.
The future is NOT the key to the notch hypothesis; there is too much “other stuff” going on in the climate system. I suspect it’s this “other stuff” (acting as noise), coupled with VERY weak TSI signals, that makes the notch a highly unlikely guess rather than a credible hypothesis.
OK! You are correct. Evidence is the key to the truth all the way around. So I suggest that we both wait for the whole detailed explanation of the solar theory before jumping to conclusions.
As a separate matter, you are one of only a small number who will answer me if I criticize their point of view. And I respect that, even though I may disagree with you.
“A team of researchers from the Laboratory of Solar System Physics and Astrophysics of the Space Research Centre, in broad international cooperation, has for years conducted comprehensive studies of the heliosphere. They recently published the results of their next step.
The photoionization of neutral atoms by the ultraviolet radiation of the Sun is important both for understanding photochemical processes in the upper atmosphere of the Earth and in the physics of the heliosphere. Photoionization must be accounted for exactly to understand how the Sun modifies the gas streams coming to us from the interstellar medium. Taking these modifications into account, together with measurements of interstellar gas streams carried out in the interior of the solar system, we can infer what is happening in the interstellar cloud surrounding the Sun. Since the flux of solar UV radiation varies strongly in time with the 11-year cycle of solar activity, we need to know the rate of photoionization over the space of a few years before such observations, and if we want to compare the results of different observations spaced in time, then over decades. Unfortunately, a sufficiently long series of such measurements does not exist, so in the past researchers often relied on individual estimates or treated ionization as an additional unknown parameter.
Direct measurements of solar radiation in the band responsible for photoionization, possible only from outside the Earth’s atmosphere, are technically complex, among other reasons because of the fairly rapid and hard-to-characterize changes in the detectors. Daily measurements of the solar spectrum in the far-ultraviolet range have been performed only since 2002 (NASA’s TIMED); since the mid-90s (ESA’s SOHO spacecraft) measurements covered part of the spectral range. Fortunately, changes in the solar spectrum in the range responsible for photoionization are correlated with the radio emission of the Sun in the decimetre band, which has been accurately measured by telescopes on Earth since 1948.”
http://iopscience.iop.org/0067-0049/210/1/12/article
How fortunate we are to have folks such as Jo and David (and so many others posting here) of such class and intellectual honesty on the skeptics’ side.
And this “open review” does seem to work well. The only thing I would note is that my comparison of the notch results (agreements and disagreements) involved a totally different set of tools (Matlab, pencil and paper, 50-year-old texts, electrical components on a bench) than David was using. And where there WAS agreement, the curves overplotted to the point where I had to make sure I really had plotted both! Different tools make the findings stronger.
I was delighted that in an email David said he had lived here in Ithaca for a time and remembered the hills and the snow. HA – the hills are still here. And a few days ago, when I went out at 2 AM to see if the deer were of a mind to leave any of our vegetables for us, it was 46 degrees F, so I’m not so sure snow was out of the question. (For those wondering, Ithaca is in New York, Northern Hemisphere, and it’s July!)
I was honored to be a part of that. And I must say this light-on-math EE learned something. Thank you for your perseverance.
Ithaca in New York State? Surely not. I thought Ithaca was the legendary island home of Ulysses.
Well – actually Ulysses is not even a super-hero person but rather the name of the township north of the Township of Ithaca! We also have nearby Syracuse, Rome, Romulus, and Marathon. Even Seneca, but probably that is Native American. But to see if you are “from these parts”: can you correctly pronounce Skaneateles (not classical, but Iroquoian)?
So where did they dig up Chateaugay from? (Went to school with a fellow from there).
Bernie,
I still don’t see how a notch filter can be physically meaningful in climate. There’s still the causality problem.
Finding a lag from somewhere else or invoking a FactorX that is out of phase with SSN does not solve the fundamental problem.
Unless I’ve missed an episode in this duodecology it seems that this has not been addressed yet.
Greg –
I think there are two uses of “causality” here. The first pertains to the temporal relationships between input/output, and this is all that may have been resolved here. Engineering.
The second use is in the sense of BE-cause (instigation, or a reason for something occurring). This is open.
Indeed notches in nature may be hard to find. Try to draw a mechanical analog of one using springs, dampers, and masses (and all the strings and pulleys you wish!).
If you are allowed to use an acoustic delay line (a pipe) it is pretty easy.
An acoustic delay line with minimal attenuation and a flat frequency response.
So all that is needed is a physical effect that can store SSN-related FactorX for about 5.5 years without noticeable high- or low-frequency attenuation, or phase distortion (other than the delay itself).
Any suggestions as to what that might be?
AFAICS the notch has no physical parallel and is a misinterpretation of the original analysis.
Very good point. PERIODIC notches (comb filters) do occur in nature. The same mechanism (a delayed signal recombined with a direct signal) is familiar as multi-path fading (destructive interference), and as the musical special effect variously known as phasing, flanging, or “jetsounds”. [The effect was first achieved with a reel-to-reel tape delay by touching and slowing the source reel. The British call their reels “flanges”. “Jetsounds” comes from the ethereal effect similar to that of a jet plane engine sound reflecting off a runway (thus delayed) and recombining with the direct sound, the delay varying.] It is feed-forward, and I think would have to be by addition of direct and delayed. Nulls would occur at odd harmonics of the reciprocal of twice the delay. A null at frequency 1/11 would require a delay of 5.5 years. Frequencies of 3/11, 5/11, 7/11… would also be nulled. But nature is not THAT neat! Interesting.
I had in mind rather the mass/spring/damper “analogs” of electrical R-L-C. These have HP (acceleration) integrated to BP (velocity) integrated to LP (displacement). HP, BP, and LP are easy to visualize. Notch, the sum of LP and HP, is hard to “decouple”. Rube Goldberg strings and pulleys – I don’t have a drawing.
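The feed-forward comb Bernie describes is two lines of arithmetic to check (a sketch, using the 5.5-year delay from the comment above):

```python
import numpy as np

tau = 5.5                                          # delay in years
f = np.array([1, 2, 3, 4, 5]) / 11.0               # multiples of 1/11 cycles per year
gain = np.abs(1 + np.exp(-2j * np.pi * f * tau))   # direct plus delayed signal
print(gain.round(3))
# -> [0. 2. 0. 2. 0.]: nulls at 1/11, 3/11, 5/11; full gain at the even multiples
```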
Some process that resonates at 11 years and absorbs the power of FactorX. However, since it is a resonance it will not deal too well with the variable period and phase of the SSN.
This is strictly a hypothetical –
Suppose TSI declines and, coincident with that, the solar magnetic field increases. And through some mechanism the magnetic field affects Earth’s ground-level TSI, keeping integrated TSI constant for 11 years.
There is your delay. The next cycle with reduced magnetic field then reflects the TSI fall in the previous cycle.
Not unphysical. And it produces the 14 dB or so of attenuation (not perfect cancellation) that we see in the FT.
Of course it could all be imagination, data error, and modeling error. That is what is interesting about this work. We may learn that it works as David imagines. Or not. Rejecting a hypothesis is just as important to science as accepting it. Either way you learn something.
We have a bias towards positive learning (Eureka! I found it!) which is not bad. But if it rules out negative learning (CO2 does not drive climate) then we are losers. We are then stuck with a theory that is well past its “sell by” date. Which wastes a lot of effort
I think the MOST important thing to come out of this line of work is the analysis tools – even more so than the results of the analysis. And the tools are being subject here to strict scrutiny. Good.
Didn’t they make some sort of firearm there?
More than a sort:
http://en.wikipedia.org/wiki/Ithaca_Gun_Company
But they are now in Ohio.
Bernie, it occurs to me that Ithaca is nearly perfectly opposite on the globe from Perth. I wonder how that fits into the equation? 🙂
Indeed, Perth is the closest to an antipode for me, if one prefers land to ocean. I sell publications (back issues mostly) and most orders to Australia are to the Melbourne and Sydney areas. (One order to U. Melbourne, Victoria went to Victoria BC, Canada first!) I was watching for Perth, and finally got an order there a year or two ago.
Ithaca Gun made what were supposedly the world’s best shotguns. Long gone, the site is now apartments, but still called “Gun Hill”.
From the ACRIM web site, the contiguous satellite TSI database from 1978 to present is described as being “comprised of the observations from 7 independent experiments”. The site further addresses the issue of matching these experiments in an absolute sense, giving an approximate error of 1 ppt (one part in one thousand). This corresponds to 0.1%, or +/- 1.36 W/m^2.
The variation within a solar cycle, such as the current drop from 2003 to 2008 being seen as evidence for future cooling, is on the order of +/- 0.5 W/m^2.
Given that the absolute error in the inter-calibration of satellite TSI experiments is currently two times higher than the total change within a solar cycle, how can you analyze amplitude across multiple solar cycles with such assumed accuracy?
http://www.acrim.com
Steve,
Accuracy is not the same as precision, or resolution.
Accuracy is how you compare one instrument to another.
Precision tells you the smallest detectable change.
Resolution tells you how many extra bits above the precision you have. You need extra resolution above the precision to act as a guard band.
The difference between accuracy and resolution is one of the reasons climate science uses anomalies to find changes in the “global” temperature.
Back in another thread the Great LS got boxed in by that one (Accuracy, Precision, Resolution).
So to answer your question: the TSI “meters” in space are good to about 1 in 1,000 for accuracy, 1 in 100,000 for precision, and one part in 200,000 for resolution. I assume that the guard band being only one bit is due to the “cost” (in energy) of adding more bits.
I understand the difference between precision and accuracy. The problem relates to the fact that seven separate experiments make up the TSI data set from 1978 to present. None of the separate experiments is continuous throughout this time period. To compare solar cycle 21, for example, you have to adjust three different experiments, where each one is based on different equipment. Even the ACRIM data sets vary significantly (1, 2, and 3) and there is a 2 year gap from 1990-1992.
From 2003 to present there are two continuously overlapping data sets – ACRIM3 and SORCE/TIM. You can see these time series diverging from one another even when plotted on a coarse scale spanning 14 W/m^2.
You can’t mention resolution and precision without discussing drift and repeatability. The data sets don’t offer the type of absolute accuracy that you need to assimilate them into a single contiguous data set. The scientists who offer up these data sets even make this clear.
Making the claim that TSI fell by 0.25 W/m^2 is therefore incorrect. You can run noise through a 24-bit A/D converter to obtain a very high resolution measurement. This does not mean it is no longer noise. It is very precise noise, measured to a high resolution.
As the scientists themselves are saying (at ACRIM)…
“A carefully implemented redundant, overlap strategy should therefore be capable of producing a climate timescale (decades to centuries and longer) TSI record with useful traceability for assessing climate response to TSI variation.”
You need redundancy and overlap to achieve a traceable 5 ppm anomaly, which presently is not available with previous TSI data sets.
Steve,
I concur! Given the imputed accuracy, the overlap is horrible. So obviously the accuracy is not as claimed. I first got a taste of that at Willie Soon’s presentation. See video here:
http://www.ustream.tv/recorded/49735731
Let me add that what you are pointing out is that neither the accuracy nor the precision is as claimed. If we did know the real precision, it may be that there was in fact a drop in TSI, IFF (If and Only IF) confounding factors like instrument drift, noise increases, etc. are accounted for.
Measuring physical quantities is hard. Frequency is easier – until you start going for numbers like 1 part in 10^16, where small changes in gravitation can disturb you.
Excuse an old computer programmer for asking but how does a difference of 1 in 100,000 between precision and resolution work out to be 1 bit? You don’t get from 100,000 to 200,000 with 1 bit in any binary hardware I’ve ever seen.
So what am I misunderstanding about what you said?
I know you may be a little slow, but 100,000 * 2^1 = 200,000, and 2^1 looks like 1 bit to me.
200,000 is 100,000 shifted left by one bit.
OK. Let me give it to you in hex: 100,000 = 186A0h,
200,000 = 30D40h.
Now let us try binary: 100,000 = 011000011010100000b,
200,000 = 110000110101000000b.
Is it clear now?
Thanks.
I see what I missed. It probably is because I’ve spent a lot of years in a world where the available precision is fixed by hardware and my job was to consider what word size would hold the largest value required to solve the problem. Accuracy was always the engineer’s problem and I had only what some sensor could give me to work with, usually an A/D converter. We always displayed data graphically and in text form — text to two decimal places regardless of anything else and users knew our performance spec and made their own judgments.
The overkill wasn’t necessary. 🙂
Excellent! I do hardware and software and it gets complicated. On top of that with this problem we have no idea about the internals because they are not given. We are told the system is designed around an FFT and a bridge balancer. We have a thermistor in the bridge and have no specs on it. What is the resistance? What is the sensitivity at 31°C? What is the noise in the opamp? What gain is used?
And worst of all how much do the thermistors drift over time in a radiation environment? How about the opamp gain and offset?
In addition I would have added a platinum resistance thermometer to the circuit as a sanity check. It is not as sensitive as a thermistor but it is way more stable.
I’d like to talk to the designers. It looks to me like they have made more assumptions than are warranted.
On top of that, Willie Soon makes a very good point. With a small hole, variations in hole size after calibration due to ? can shift the calibration. They also make the instrument more sensitive to pointing errors. But a bigger hole means more power is required for the apparatus. 68 mW is not a lot to read out to 1 part in 100,000: that means you are looking for changes on the order of 0.68 microwatts. Maybe in a lab after calibration. After a rocket ride and some years in space?
I went over this in another thread with the Great LS. He got exactly none of it. He flipped me off with something like “read and learn”. I can’t figure out why he has any fans at all. The ignoratti.
I don’t spend as much time at WUWT as I used to. After a run-in with WE there and LS here, I much prefer the company of engineers. On the whole a much more grounded-in-reality lot, and way more willing to admit mistakes. Despite our egos. It is part of the territory we live in.
If I was to put it in a sentence – They want to look good. Engineers have to be good.
BTW have you been following the GM switch recall saga? I would have strung up the engineer who changed the switch design without changing the part number by the balls, and then had him drawn and quartered. Pour encourager les autres.
I’ve been through some of the same “discussions” and, frankly, misgivings about measuring RF at uV input levels, wondering if we really could keep the amplification calibrated (not a simple thing) and then measure it with -5 to +5 V A/D converters and come out within our spec of +/- 2.5 dB. And of course, no device is going to measure uV stuff directly, so you need to do quite a bit of amplification.
In the end we could do it to much closer tolerance than the +/- 2.5 dB spec. But there was a lot of trial and error along the way.
At lower frequencies, about 10 Hz – 9 kHz, I used the FFT. But you have the same problem: amplification and keeping it calibrated. If it’s calibrated right, the math to get from peak volts out of the FFT to dB (say, dBuV) is child’s play.
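(For anyone following along, that child’s-play conversion is roughly the following, assuming a sinusoidal component so that peak converts to RMS by 1/sqrt(2):)

```python
import math

def volts_to_dbuv(v_peak):
    """Peak voltage from an FFT bin to dB relative to 1 microvolt (dBuV)."""
    v_rms = v_peak / math.sqrt(2.0)   # peak -> RMS for a sine component
    return 20.0 * math.log10(v_rms / 1e-6)

print(volts_to_dbuv(1.0))             # 1 V peak ~= 117 dBuV
```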
———————————
I haven’t owned a GM car for so long I can’t even remember how to spell it ( ;-)). But my wife drives one and I don’t like it at all and avoid it like the plague. So I don’t follow their foibles. But there is a similar problem when you modify software and don’t change its version, as you no doubt know. Before I retired I had become absolutely convinced that I was the only one in the company who understood the importance of being able to tell every version from every other version. Their latest product, one to which I contributed a lot of work, still has no discernible version number by which I could tell which version I’m running.
And no, I won’t say what the product actually is or the company name.
I’m doing a small start-up with a gang of freelancers. Sharp guys, but talk about herding cats.
http://spacetimepro.blogspot.com/
I couldn’t get proper version control on the software until it started causing internal problems. Now everything gets its own number. I use name, date, and time for my boards.
re RH meter: nice little project.
Yeah, that would be about right for Forth. You add an extra digit or a decimal point to your output format and it becomes difficult to understand. 😉
Greg
July 23, 2014 at 4:59 pm
I made it simple so people not previously exposed to Forth would have an easy time learning it. For the advanced user things like */ are not a problem.
On top of that, compare with “C” and printf and casting and the rest of the abominations.
I’m not a great fan of C, certainly not for writing userland programs.
Most of the security issues and the constant need for patching and updates are because of the lack of safeguards in the compiler. That may make sense for writing operating systems, where coders are generally highly competent and require the fastest executable code.
Most other stuff would be better written in a language like Turbo Pascal, which allows range checking, stack-overflow checking, and strings that can’t be crafted to dump on surrounding code or stack space.
Part of that inappropriate language choice comes from thinking that an internet browser is part of an “operating system” which needs “windows”.
I had a Forth compiler back in 1980-ish. I found it a fascinating intellectual challenge but totally unproductive. Probably great for tight code requirements of programming cruise missiles, less so for thermometers.
Probably great for tight code requirements of programming cruise missiles, less so for thermometers.
How about Argo buoys?
=====
http://www.forth.org/successes.html
Argo’s ensemble of sonar, lights and cameras was orchestrated by an array of computers, each programmed in a different computer language. The computer on the unmanned Argo itself was programmed in Forth, a concise but versatile language originally designed to regulate movement of telescopes and also used to control devices and processes ranging from heart monitors to special-effects video cameras. The computer on the Knorr was programmed in C, a powerful but rather cryptic language capable of precisely specifying computer operations. The telemetry system at either end of the finger-thick coax cable connecting the vessels, which in effect enabled their computers to talk to each other, was programmed in a third, rudimentary tongue known as assembly language.
Forth was the only high-level language that could be used on the submersible Argo’s computer.
=============
Why would that be? Well, C has the notorious problem of code bloat, which means you need a “bigger” chip, which means more power.
I believe Argo buoys have something to do with temperature.
I agree with the bloat issue; you seem to have missed the bit where I explained I’m not a fan of C for all uses.
I did build an embedded system using my own “Linux from scratch” (i.e. using C); ironically, that was also to measure temperature. I did that to have a full OS plus web server, wifi and SVG graphics output.
Forth.org calling C “cryptic” , that’s funny.
Don’t get me wrong, Forth produces really compact code with minimum overhead. That’s why I was interested in it in 1980.
That’s probably about when they started the design of ARGO, too.
There clearly are extreme cases where some hardware requirement makes the pain of Forth worthwhile.
I see the ARGO comment is referenced with the following:
“Exerpted from: The Tortuous Path of Early Programming”
😉
Forth on Arduino? Hmm.
I have a half-built Arduino data logger and the Arduino C libs are pretty … poor.
May give it a look.
All rather OT here so I’ll drop it.
Greg
July 23, 2014 at 10:24 pm
Cryptic? Only if you write it that way.
I had a team of 3 well-disciplined (by me) Forth programmers. We consistently beat a team of 30 C programmers. We got the job done in 1 month; they were still struggling after 6 months. This went on for 2 years. The government inspector who looked at our code said it was the best-written code (in any language) that he had seen in 3 years. (The project was a government R&D shoot-out for a military radio.)
So why is C used by business? Because you can apply 30 mediocre programmers to the job. Why should Forth be used? You only need 3 good programmers. Even if you pay them double you get a 30X cost advantage, not to mention the time saved, which multiplies the cost advantage. Overhead, time to market, and all that.
Forth is a multiplier. It makes good programmers better. It makes bad programmers worse.
I’m not trying to convince you to use Forth. But I am loathe to let misconceptions pass without an alternate view.
A very interesting conversation. But I wonder about, “It makes good programmers better. It makes bad programmers worse.” In my experience the good ones can turn in the desired results with any language and the bad ones can’t do it with even the “best” language. A very large project I worked on that was done in FORTRAN comes to mind.
I’ve met only one language that actually worked against the programmer to the point where it has been dropped in favor of, among others, C++, and that’s the Ada programming language, designed for the U.S. Department of Defense (DOD) in the 1980s: a case of complete overkill. I did one simple Ada program and gave up on it. I was hours doing jobs that should have taken 5 or 10 minutes.
For all of the 17 years I worked for the company I just retired from I had everything I did under source control and could go back to the state of any project at the end of any given day. And sometimes to the state at several points in a single day if they represented significant development milestones. Microsoft compilers make version control easier by providing a standard mechanism you can use or you can adopt your own method. And source control can identify each version provided you take the time to label things correctly. Source version control was a lesson I brought with me to the job from long prior experience.
Everything was C++ by the way. And that was dictated to me when I started. I understand the complaints above about the problems with C, all of which are also still in C++ and I suppose this will sound like bragging but I never had significant trouble remembering the difference between = and ==, which is the worst of it. One will not work where you need the other. And until recently compilers didn’t bother to look for suspicious usage and give a warning. But in almost all circumstances either one will compile without error if misused — real bad language design, that. But it’s being able to cope with such things that separates the good from the not so good.
That a small group can beat a team of 30 to the finish line isn’t surprising. When you have so many on a project the division of labor and communication become the limiting factors and it takes a lot of sharp management attention to get it right. It almost doesn’t depend on the programmers at all compared with the management burden. In the days when I did DOD work I met only one top management team that got all the engineers and programmers in sync with each other and kept the project on time and in budget. But there was literally one manager for every 4 – 6 programmers. That’s a lot of management overhead.
You’re fortunate if you can work alone or with only one or two others. But then you lose the extra sets of eyes during testing that can probably find bugs you might miss.
Software, for all the advances toward making it a science, remains largely an art that some are always going to be better at than others regardless of the language.
Roy Hogue
July 24, 2014 at 1:59 am
You should look into how Forth handles objects. Very simple. It was in fact one of the first object-oriented languages, well before anything in the C family.
BTW Forth encourages writing your application in small, well-tested fragments. C, because of its overhead (stack thrash), does not.
Let me add further that not all bits in a digital to analog converter (DAC) are significant. There is something called the effective number of bits (ENOB) and it can be fractional. Numbers like 15.8 bits or 13.2 bits etc. are not uncommon.
For instance:
15.8 bits = 2^15.8 ≈ 57,052.4
16.61 bits ≈ 100,000
For 100,000 precision the ENOB is nearly equal to 16.609640474436811739351597147447
Which is probably close enough for engineering work.
I understand that general problem. But my work was always A/D, not D/A. That presents a set of problems too. But again, the engineer’s worry. not mine.
“That graph of the phase of your transfer function matches mine, so that pretty much seals it at your end. […]” [David Evans’ reply to Bernie Hutchins, quoted in full above]
once again: share your code used to create the model and people will not have to reverse engineer your approach from words.
it’s simple.
Mosher said:
“once again: share your code used to create the model and people will not have to reverse engineer your approach from words.
it’s simple.”
Likely you are correct in a general case. But to clarify, HERE having or not having the code made no difference. And as I suggested above, approaching the same problem with an alternative set of resources (tools) has considerable merit, in my view.
having the code would have doubtless made your job easier
having the code would have allowed MORE EYES on the problem
Evans made a claim: that he built a model using certain steps.
that claim can be assessed easily by having him share the code.
nothing is gained from keeping it secret
there is the possibility of finding errors if he releases it.
we demanded Mann’s code.
Evans should release his code
we should not have to ask
and you should not have to defend it.
There is no practical reason not to release it.
there is no scientific reason not to release it
there is no reason to keep it secret other than that by releasing it people may find something wrong with it
Stephen,
It was released. It was released on the 8th July. All of the release details are here;
http://joannenova.com.au/2014/07/big-news-ix-the-model/
You are creating a strawman that has no validity whatsoever.
Mosher said:
“having the code would have doubtless made your job easier”
Doubtless? I already SAID it would have made NO difference in THIS case.
He also said:
“having the code would have allowed MORE EYES on the problem”
Now – that would seem to be correct.
how do you know it would have made no difference without testing it?
in any case it would not have slowed you down and would have helped others.
at some point you just need to get out of denial
More importantly you had many many folks here arguing that model construction was important,
that the models’ PREDICTIONS were important.
And so they slagged those of us (Willis and me) who wanted the code for model construction.
if you had any principles you would support us in our request and stop defending the indefensible
Steven Mosher said a number of things July 27, 2014 at 11:45 am:
SM: “how do you know it would have made no difference without testing it?”
BH: This is the THIRD time I have to tell you it would not have made any difference. This was an engineering issue. At one point, David started asking me relevant questions and I knew he had heard the “ominous ring of truth” and would find his own mistake. (In unfamiliar code, it would have taken me forever. That is, I wouldn’t have even tried.)
SM: “in any case it would not have slowed you down and would have helped others.”
BH: So? I said pretty much that too.
SM: “at some point you just need to get out of denial”
BH: Denial about what ?!? About what actually happened? That’s history and all documented here. It was engineer talk – simple, but esoteric. And – somewhat peripheral.
SM: “More importantly you had many many folks here arguing that model construction was important, that the models’ PREDICTIONS were important. And so they slagged those of us (Willis and me) who wanted the code for model construction.”
BH: I’m sure no one will suggest that you did not make YOUR position clear.
SM: “if you had any principles you would support us in our request and stop defending the indefensible”
BH: Your remark is inappropriate and embarrassing, borderline boorish. I am wondering how long I might wait for an apology. A long time I suspect.
I can’t help mentioning this because I’m not sure all your readers will know the math involved.
The DFT (Discrete FT) and the FFT (Fast FT) are the same transform and produce the same output given the same input, so DFT and FFT mean the same thing; the speed difference, though, can be rather spectacular for a transform of any size. I’ve done 16,384-point transforms, and the FFT is the only practical way, even on Intel’s fastest CPUs.
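A quick sketch of that equivalence in Python/NumPy (the length 512 is arbitrary): build the naive O(N²) DFT directly and compare it with the FFT.

import numpy as np
N = 512
x = np.random.randn(N)
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # naive O(N^2) DFT matrix
assert np.allclose(W @ x, np.fft.fft(x))      # identical output, very different cost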
As previously pointed out, a relaxation response is physically more meaningful than a notch and hence easier to justify.
Here is a 20y relaxation, which removes most of the 11y signal without even needing an extra low-pass filter, and provides the 10y lag I showed is found from cross-correlation of SSN and SST.
http://climategrog.wordpress.com/?attachment_id=998
It also avoids the need for the highly questionable nuclear fudge factor to remove the 1960’s bump.
To make an attempt at a climate model based on SSN it will be necessary to account for the increased SW insolation resulting from the major stratospheric eruptions:
http://climategrog.wordpress.com/?attachment_id=902
http://climategrog.wordpress.com/?attachment_id=955
The classically accepted volcanic cooling is only a transitory effect. The subsequent warming is not.
Despite this effect being clearly visible in TLS and detectable in ERBE radiation measurements, it seems to have escaped the notice of mainstream climatology so far.
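For concreteness, a minimal Python/NumPy sketch of the first-order relaxation response described above; the 20-year time constant is from the comment, and the stand-in forcing series is purely illustrative:

import numpy as np
tau = 20.0                   # relaxation time constant, years (from the comment)
t = np.arange(0.0, 100.0)
h = np.exp(-t / tau) / tau   # exponential (relaxation) impulse response, unit area
ssn = np.random.randn(300)   # stand-in for a detrended SSN series
response = np.convolve(ssn, h)[:len(ssn)]  # the relaxation acts as a low-pass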
Thank-you Dr Evans and Bernie Hutchins.
This idea of open review science is a breath of fresh air compared to the closed, just give us your money and we’ll give you your results, methods that infects most of climate science.
Well done.
Dr Evans, just to get this straight: your method effectively shows that incoming solar energy is ‘smeared’ across an ~11-year process, and not the instant energy-in-equals-energy-out idea that prevails today.
That is to say, the effects of solar energy impacts from ~11 years ago are some of what is dissipated from the planet now, and have been for ~10 years or so?
Or am I thinking wrongly?
The climate system has thermal inertia that smooths out the effects of bumps in the input radiation (modeled as a low pass filter, with time constant around 5 years).
Above and beyond that, we found that the solar radiation appears to have a delayed effect, as if it had a much larger effect than its direct, immediate effect, but that this more powerful effect occurs about 11 years after the change in solar radiation.
No force can actually delay itself for 11 years (what, hangs in space waiting in line to check in to the climate system on Earth?), so this must be a different force. It is synchronized with solar radiation, so it almost certainly comes from the Sun. We call it “force X” for now because we don’t know what it is, though we know some of its properties.
We know force X acts on the Earth’s albedo, so it can have quite a small amount of energy. Like a tap on a firehose, force X might be a small force but control the much larger inflow (or not) of solar radiation into the climate system (by controlling the amount of solar radiation that is reflected back out to space without entering the climate system, usually about 30%).
Force X, for instance, might be EUV and FUV (highly energetic ultraviolet) that affects ozone in the stratosphere, which in turn affects how far the jet streams are from the equator. Or any of a myriad of electrical or magnetic effects that affect clouds. Not cosmic rays though, because the synchronization is wrong.
The 11-year delay might arise because force X lags by 180 degrees of the full solar cycle of the Sun (which is about 22 years). Perhaps there is a resonance in the Sun, due to the rhythmic tugs of the Jovian planets for instance. The easiest way to make a notch filter is to build a resonance — resonance lowers some resistance massively at the resonant frequency, so it increases or decreases various quantities around the structure creating the resonance, some with peaks and some with notches.
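A toy illustration of that structure in Python/NumPy. This is not David’s model: the path weights, the 5-year time constant and the 11-year lag are all placeholder assumptions, chosen only to show “small immediate effect plus larger delayed effect”:

import numpy as np
N = 200
tsi = np.random.randn(N)               # stand-in for detrended solar radiation
t = np.arange(60.0)
lp = np.exp(-t / 5.0); lp /= lp.sum()  # ~5-year thermal low-pass (assumed)
direct = np.convolve(tsi, lp)[:N]      # small immediate effect
delayed = np.concatenate([np.zeros(11), direct[:-11]])  # force X arrives ~11 y later
temp = 0.2 * direct + 1.0 * delayed    # delayed path assumed dominant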
Many thanks.
I will think on it, which may take a while.
BTW did you ever see tchannon’s http://daedalearth.wordpress.com/2014/06/27/an-11-year-solar-signal-in-the-atmosphere/ and the paper it refers to? I’ve just read it and it is very close to your work.
That’s an interesting graph. It suggests that even by their method of detection, there is an 11-year signal in the troposphere and stratosphere, but it gets weaker near the surface.
Richard C (NZ) has dug up numerous papers that lead towards the same conclusion — the 11 year warming signal can be found in many places in the climate system, but not, apparently, in the surface temperatures.
So how does the notching mechanism apply only to the surface? This might be an important clue.
The only place I see where troposphere/tropopause/ozone and the lower atmosphere actively and very dynamically interact is the jet streams. This is also where the lower atmospheric cells interact, both in pressure and thermally.
Umm, maybe a useful place for my next searches?
Gratifying to note an increasing focus on the jet streams.
I’ve been drawing attention to them since 2007.
From Tim Channon’s post:
More damned “trend” fitting.
If you take a look at the temperature of the lower stratosphere without trying to draw straight (or ‘non-linear’) lines through it, it is very clear that the cooling is a result of atmospheric changes caused by major stratospheric eruptions.
http://climategrog.wordpress.com/?attachment_id=902
This is most decidedly NOT “consistent with” AGW.
A typical low-pass filter with a symmetrical kernel will spread changes both backwards and forwards in time. This is not physically meaningful for the kind of process you are trying to describe.
A better description of the thermal inertia would be given by a relaxation response, which as I’ve said also has low-pass properties.
Dr.David Evans
“No force can actually delay itself for 11 years (what, hangs in space waiting in line to check in to the climate system on Earth?), so this must be a different force. It is synchronized with solar radiation, so it almost certainly comes from the Sun. We call it “force X” for now because we don’t know what it is, though we know some of its properties.”
Look at the inertia of ozone. Both growth and decline occur with a clear delay, and ionization (UV and GCR) is highly dependent on the strength of solar flares.
http://www.esrl.noaa.gov/gmd/odgi/odgi_fig3.png
It can be seen that the increase in ozone during the cycles of high activity lasted exactly 11 years, from 1990 to 2001. Mid-latitude levels then fell below their 1990 values exactly 11 years later.
The slower decline at high latitudes can be explained by the increased GCR ionization.
That’s interesting. Might not account for the synchronization of force X to TSI that presumably causes the notching, but it is definitely a possibility worth keeping in mind.
Dr.David Evans
Please see the temperature drop of ozone in the stratosphere over the polar circle. With time it is transferred into the lower layers of the atmosphere.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_ANOM_ALL_SH_2014.gif
The decrease in UV is very visible above the equator.
http://www.cpc.ncep.noaa.gov/products/stratosphere/strat-trop/gif_files/time_pres_TEMP_ANOM_ALL_EQ_2014.gif
It appears that ozone is highly resistant to short-term spikes in solar activity, although the UV jumps in accordance with F10.7.
http://www.swpc.noaa.gov/SolarCycle/f10.gif
Ozone in the regions of Earth’s magnetic poles reacts very quickly to growth in the GCR. Approximately one week is sufficient to cause an increase in winter ozone.
http://www.cpc.ncep.noaa.gov/products/intraseasonal/temp10anim.shtml
You may find this interesting.
http://cosmosmagazine.com/news/earths-atmosphere-breathes-and-out/
Solar effect on the atmosphere over short time scales.
And as of last year, something unpredicted/unanticipated has happened: the upper atmosphere was found to be shrinking during the solar maximum.
http://news.discovery.com/earth/earth-atmosphere-shrinking.htm
from that link
Phil Wilkinson of the Ionospheric Prediction Service with the Australian Bureau of Meteorology says it highlights something is going on that science doesn’t understand.
imagine that eh, i would suggest there are many things going on science does not understand, climate science in particular.
Dr.David Evans
You can use the F10.7 measurement data, available since 1948, to determine changes in the UV.
“Fortunately, changes in the solar spectrum in the range responsible for photoionization are correlated with the radio radiation of the Sun in the decimetre band, which has been accurately measured by telescopes on Earth since 1948.”
http://iopscience.iop.org/0067-0049/210/1/12/article
One can see that F10.7 may be useful for determining temperature changes.
http://oi61.tinypic.com/c10dj.jpg
http://onlinelibrary.wiley.com/doi/10.1029/2004JD004873/abstract;jsessionid=3C3CFBB52CCA4916FA750D7716530598.f03t01
Dr. Evans, would you consider using the Laplace transform method of identifying the impulse response (as we discussed earlier), and then the convolution of the impulse response and the TSI input in the time domain, by inverting the Laplace transform of the impulse response, as an alternative method to check your results from the Fourier transform analyses?
I think the FT approach, while not incorrect in principle, is being misapplied and the result misinterpreted.
The problem is that the “output” is not _just_ the result of the input and the system transfer function. There is a very significant level of instrument bias, sampling error and straightforward noise.
There are also non-solar drivers, such as a 9y lunar variation that is at least as strong as the solar signal at the surface.
I suppose the same logic would apply if Laplace were applied with the same assumptions, though I think it would be a good idea to compare the two.
My relaxation model graph corresponds to an exponential impulse response, 1/(s+a) in Laplace terms, IIRC.
http://climategrog.wordpress.com/?attachment_id=998
David and Jo have a rather peculiar (to me unscientific) way of getting around the noise problem:
1. Make a known-to-be-false assumption (to be removed later) that there is no noise
2. Deduce a transfer function that can ONLY be valid if the assumption above is correct
3. Remove the known-to-be-false assumption, i.e. add in the noise of volcanoes, nuclear tests, CO2 etc
That noise has many sharp transitions, i.e. high-frequency content.
Claiming to see a relatively high-frequency TSI-driven signal (the high-frequency edge of the notch) has zero credibility for me, especially as David fails to respond to the issue.
Leonard – The Laplace transform is more general than required, and is difficult to compute numerically. All the climate variables stay bounded (finite) as time goes to infinity, so the FT is applicable. In the spirit of using the simplest tool available, the FT is preferable.
Despite that, to find the step response of a notch, the DFT and FFT are unsuitable. It would be really nice to find the step response using formulas, to avoid lengthy and possibly nasty numerical estimates. Finding the formulas is probably much easier using the Laplace transform, so I’ll try that. I will still have to verify the result numerically using an FT, but that can be done on just a few sample notches rather than on every slight variation of a notch under optimization.
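As a cross-check that sidesteps the DFT periodicity trap entirely, the step response of a notch can also be had by direct time-domain simulation of the transfer function. A minimal Python/SciPy sketch; the second-order notch form, the 11-year centre and the Q are assumptions, not David’s fitted filter:

import numpy as np
from scipy import signal
w0 = 2 * np.pi / 11.0   # notch centred on an assumed 11-year period
Q = 5.0                 # assumed sharpness
# H(s) = (s^2 + w0^2) / (s^2 + (w0/Q)s + w0^2)
notch = signal.TransferFunction([1.0, 0.0, w0**2], [1.0, w0 / Q, w0**2])
t, y = signal.step(notch, T=np.linspace(0.0, 60.0, 3000))  # true step response, no DFT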
why the focus on the step response instead of the impulse response?
Could have used either. In practice I found the step response easier to understand, easier to explain to people new to both concepts, more “intuitive”. Also, no ambiguity over what constitutes “an impulse”.
Thanks for the reply.
There is no ambiguity over what a unit impulse or the Dirac fn is. The interest of the impulse response is that you can convolve it with any input TS and get the output.
Isn’t that ultimately what you want to do with SSN, find out the climate response to SSN related forcing and compare it to some kind of surface temperature record?
If your FFT ratio method is finding the correct tx fn, work out the impulse response and convolve it with SSN. This should give you the solar component of the surface record. How does it compare in scale and form to the established surface record?
Irrespective of what the actual mechanism is, this should give you some idea of how much of the surface record can be modelled as a fn of solar-related forcing.
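A hedged sketch of that workflow in Python/NumPy; the transfer function below is a stand-in low-pass, not David’s fitted one, and the SSN series is a toy:

import numpy as np
N = 512
f = np.fft.rfftfreq(N, d=1.0)              # yearly sampling assumed
H = 1.0 / (1.0 + 2j * np.pi * f * 5.0)     # stand-in transfer fn (5 y low-pass)
h = np.fft.irfft(H, n=N)                   # impulse response from the tx fn
ssn = np.random.randn(N)                   # stand-in for the SSN series
solar_component = np.convolve(ssn, h)[:N]  # predicted solar part of the record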
Count me as one who can more easily deal with a step than an impulse response. Both cause real-world systems, electrical or mechanical, to do more or less the same thing, what one engineer I worked with likened to a hangover: there is “ringing” after the initial stimulus. But I never really could say I understood impulse response.
I suppose you could consider a step function to be an impulse. But I think there’s more to an impulse generally than a simple step function. Right?
In all the work I did with the FFT, we warned users that only continuous input would read out correctly, because anything else would be incorrect in frequency spectrum, amplitude or both.
I don’t see what is more difficult about an impulse.
Tee up a golf ball tied to a bit of elastic, or consider the impact of steel balls in Newton’s cradle, if you want mechanical examples.
Drop a pebble into a pond (an impulse as far as the surface is concerned).
An idealised impulse is just two idealised step fns back to back, except that it is the area under it that is normalised rather than the height.
It’s like you are imparting a pulse of energy to the system and observing how it rings, rather than applying and maintaining a force and seeing how it adapts.
Conceptually it does not seem any more or less complicated. A step may seem more obvious if that is what you were taught and you are familiar with it, that’s all.
The nice thing about the impulse response is that you can convolve it with any input signal to get the output (provided that you can represent it to sufficient accuracy in a reasonably short “kernel”).
Convolution is just like a weighted running mean or any FIR filter to calculate. Simple.
Oh Greg !
An impulse is MUCH more difficult than a step. We are talking Dirac deltas here – right?
A child turning on a light understands a step. The light was OFF, I threw the switch, and now it is ON.
The Dirac delta is a monster: infinite height and zero width. Unit area. Limits! Engineering and physics students have trouble with these things. Until the Dirac delta is “properly clothed” (inside an integral) we are asking to be misled. And what would sampling with a periodic train of Dirac deltas mean? Is it the same as multiplying by a train of Kronecker deltas? And suppose I propose to measure the impulse response in the lab: how short is short enough? Sure, my questions have reasonable answers, but I don’t think they come easily.
The worst thing is that if you use impulses instead of steps, you will distract many, many people who could otherwise basically follow a general scientific discussion.
Yeah, well, you have to assume some basic level of ability. You are not likely to explain climate variation if you limit your discussion to what a child or freshman student can understand.
Since the science ability of most climatologists seems to be limited to putting a pot of water over a flame and sticking a thermometer into it, you’ll probably lose most of them too.
I thought David’s initiative here was to apply some engineering methods to problem solving and see whether he could produce an alternative model.
If you set the bar low enough for everyone (even climatologists) to follow, you’ll end up back at CO2 plus “noise”, because you have no tools capable of studying anything more subtle than a steady rise.
I was aiming my comments at readers with at least graduate-level training in engineering or a hard science.
If you can’t assume a basic understanding of calculus, ODEs and frequency analysis, and limit the discussion to fitting ‘linear trends’ to running means, you are not going to get any further than AGW.
Greg – nothing I strongly disagree with!
But you said:
“You are not likely to explain climate variation if you limit your discussion to what a child or freshman student can understand.”
Is the “contrapositive” to this true?
No idea what a contrapositive is. If it means the opposite, there’s no guarantee that an engineering approach will ‘solve’ the fundamental questions of how climate works.
Maybe it’s too chaotic, and rather vague statistical results are all that can be drawn.
Since the last 30y have been more or less wasted attempting to prove a foregone conclusion, there are a lot of basic inputs that have still not even been determined.
As far as I am aware, no one has recognised, let alone explained, the effect of volcanoes beyond the initial cooling. Here it is clearly seen in the lower stratosphere:
http://climategrog.wordpress.com/?attachment_id=902
Aerosol forcing is being deliberately underestimated to make models work with high sensitivity:
http://climategrog.wordpress.com/?attachment_id=884
There is a clear lunar influence on globally averaged SST (which is often confused with a possible solar influence).
http://climategrog.wordpress.com/?attachment_id=981
This was touched on by Keeling and Whorf in 1996, but it was already becoming taboo and has been roundly ignored since.
Lack of recognition of the basic inputs severely limits the ability of a more rigorous engineering approach.
Greg
July 25, 2014 at 5:22 pm
Volcanic activity seems to correlate with low solar activity.
http://www.iceagenow.com/Volcanic_activity_increasing_worldwide.htm
Why do I keep mentioning volcanoes?
Because ice ages correlate with huge increases in volcanic activity.
http://www.debate.org/photos/albums/1/2/1258/32577-1258-uvv5z-a.jpg
Staying finite is not a sufficient condition. The input needs to have a stationary mean, which is clearly not the case for SST over the period of study.
Much of the FT content will be recreating the ramp as a repetitive form. A similar problem to your step not being a step but a square wave.
The only clear result you have from the FT analysis is that the 11y periodicity is lacking in the output.
I don’t see anything in what you present that distinguishes this result from low-pass filter or no SSN signal at all in the output.
From other evidence (that I’ve presented already) I’d say there is evidence of a small 22y signal, so I’d tend to favour low-pass.
There’s also a 10y lag in the cross-correlation of a roughly 70-year periodicity that is significant against red noise. The usual proviso that correlation does not prove causation should be noted there.
Since your ‘input’ signal (SSN) has most of its energy concentrated around 11y, unlike a step or impulse, it is not a good test signal from which to infer the transfer fn of the system.
Your interpretation of the FT is erroneous. I and others have raised this several times and AFAIK you have not replied or addressed that issue yet.
“I don’t see anything in what you present that distinguishes this result from low-pass filter or no SSN signal at all in the output.”
“There’s also a 10y lag in cross correlation of a roughly 70 year periodicity that is significant against red noise. The usual proviso that correlation does not prove causation should be noted there.
Since your ‘input’ signal (SSN) has most of its energy concentrated around 11y, unlike a step or impulse, it is not a good test signal from which to infer the transfer fn of the system.
Your interpretation of the FT is erroneous. I and others have raised this several times and AFAIK you have not replied or addressed that issue yet.”
Estimation of impulse response of Earth’s climate system at short time intervals
M. B. Bogdanov, T. Yu. Efremova, A. V. Katrushchenko
Journal of Atmospheric and Solar-Terrestrial Physics, 09/2012. DOI: 10.1016/j.jastp.2012.06.007
ABSTRACT: A method is described for restoration of the impulse response h(t) of the Earth’s climate system (ECS), which is regarded as a time-invariant linear dynamic system whose input is the change in solar constant, and whose output is the global mean surface temperature anomalies. Search for a solution of the ill-posed inverse problem is carried out on a compact set of non-negative, monotonically non-increasing, convex downward functions. This suggests that the ECS may be a first-order dynamic system or a set of similar independent subsystems with different time constants. Results of restoration of h(t) at time intervals up to 100 months show that it is a rapidly decreasing function, which does not differ from zero for t > 3 months. An estimate of the equivalent time constant gives the average value of 1.04 ± 0.17 months. The sensitivity of the ECS to changes in radiative forcing at the top of the atmosphere is equal to 0.41 ± 0.05 K W⁻¹ m².
Sea surface temperature variability in the southwest tropical Pacific since AD 1649
K. L. DeLong, T. M. Quinn, F. W. Taylor, Ke Lin and Chuan-Chou Shen
Nature Climate Change, Vol. 2, November 2012
A prime focus of research is differentiating the contributions of natural climate variability from those that are anthropogenically forced, especially as it relates to climate prediction. The short length of instrumental records, particularly from the South Pacific, hampers this research, specifically for investigations of decadal to centennial scale variability. Here we present a sea surface temperature (SST) reconstruction derived from highly reproducible records of strontium-to-calcium ratios (Sr/Ca) in corals from New Caledonia to investigate natural SST variability in the southwest tropical Pacific from AD 1649–1999. Our results reveal periods of warmer and colder temperatures of the order of decades during the Little Ice Age that do not correspond to long-term variations in solar irradiance or the 11-year sunspot cycle. We suggest that solar variability does not explain decadal to centennial scale SST variability in reconstructions from the southwest tropical Pacific. Our SST reconstruction covaries with the Southern Hemisphere Pacific decadal oscillation and the South Pacific decadal oscillation, from which SST anomalies in the southwest Pacific are linked to precipitation anomalies in the western tropical Pacific. We find that decadal scale SST variability has changed in strength and periodicity after 1893, suggesting a shift in natural variability for this location.
Thanks, Steve, that looks like interesting stuff. I may comment later when I’ve had time to read it ( if I manage to find a copy that’s not paywalled ).
Thank you Dr. Evans. In terms of discrete or continuous, the z transform is (from Wikipedia)
“In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency domain representation. It can be considered as a discrete-time equivalent of the Laplace transform. This similarity is explored in the theory of time scale calculus”.
I do not know if this is applicable in your case. But I am happy to see you say you will try the Laplace transform. I hope it helps in your analyses.
The z-transform (ZT – used for “digital filters”) is actually a “sibling” of the Laplace transform, the LT itself being the “grandfather” of a family of SIX transforms (pairs). Three of the six are the LT and its “children”, the Fourier Transform (FT) and the Fourier Series (FS). The ZT also has two children, the Discrete-Time Fourier Transform (DTFT) and the Discrete Fourier Transform (DFT). The DFT has a “show-off friend”, the Fast Fourier Transform (FFT), but that is not a separate transform itself. I recently assembled two “maps” showing the interrelationships of the family of six and posted them here:
http://electronotes.netfirms.com/AN410.pdf
A teaching colleague of mine used to apologize for bringing out yet another transform by saying that it was not really new, but that we only know a few things, so we dress them up in new clothes and parade them out for the students ;).
If we treat the strong solar minimum in 2008 as the solar signal, and we take into account the length of the previous cycle (12 years), the effect of this solar minimum will be seen in 2020. Of course, the temperature drop will be uneven, depending on the thermohaline circulation.
ren,
We are starting to see the decline already in the US Midwest. This summer has been unusually cool, which usually portends a very cold winter. Russia has been seeing similar conditions.
So if we go by your metric, 2014 − 12 = 2002, which is just about the point David identified (2003) as the sharp drop-off.
Habibullo Abdussamatov has identified 2014 as the first year in which cooling will be identifiable. Last year he was more tentative; this year he is definite. Visit this page:
http://www.oarval.org/ClimateChangeBW.htm
And look for this image:
“Figure 1. Variations of both the TSI and solar activity in 1978-2013 and prognoses of these variations to cycles 24-27 until 2045.
The arrow indicates the beginning of the new Little Ice Age epoch after the maximum of cycle 24.”
The temperatures at a height of 700 hPa show that the circulation over North America is very similar to what it was in winter.
http://earth.nullschool.net/#2014/07/29/0300Z/wind/isobaric/700hPa/overlay=temp/orthographic=-103.77,55.66,729
Very nice graphic!
A word of warning:
These solar model threads are beginning to sound more like Astrology than Science.
Try re-reading your post whilst imagining it being spoken by a fortune teller.
Ah. But we are not fortune tellers in the current context. We are misfortune tellers.
A coming little ice age and the death of CO2 as a climate driver.
Click Earth. This is an excellent map of the current data. This is not astrology but reality.
http://earth.nullschool.net/#current/wind/isobaric/70hPa/orthographic=223.10,-90.99,365
It’s not astrology because it’s history. What has this to do with predictions of “the coming little ice age” ?
Winter patterns of circulation in the summer?
Bernie Hutchins July 24, 2014 at 2:15 am
“I recently assembled two “maps” showing the interrelationships of the family of six and posted it here.”
http://electronotes.netfirms.com/AN410.pdf
Bernie,
Thank you for your maps. I take it that your DTFT is the same as the set of all of David’s MFTs.
Do you or others know if the models use a true 30-year Gaussian (with no aliasing), or a boxcar with a semi-Gaussian top (with sinc aliasing)? Could this be David’s notch at 10 years, out of white noise?
Here is what we can say for sure.
(1) The DTFT is not new – just unappreciated! Originally (1970), in the earliest days of digital signal processing, it was just called the Fourier Transform (of a discrete-time signal), causing much confusion. Eventually the CTFT and DTFT distinction was established. Two important notions about the DTFT eventually emerged: (A) it really was just the familiar Fourier Series with the roles of time and frequency reversed; and (B) when we calculated the “frequency response” of a digital filter we were using the DTFT – that’s what it was “good for”. Soon the sampled-in-frequency form of the DTFT, the DFT, emerged, and the facts that the DFT was computable, always, from the given time DATA, and that the DFT has a fast friend (the FFT), took over our attention. So lots of tools already.
(2) If we are talking about any sort of averaging, we are combining data sequences with different delays, and effectively have digital filtering going on. Averaging (and similar) is usually FIR (Finite Impulse Response) and tends to have notches (unit-circle zeros). The DTFT gives us the frequency responses. But there may be two very DIFFERENT things here: (A) David sees a dip in certain spectra as a notch-like, naturally-occurring effect; and (B) in human-generated smoothing of data series we need to be alert for mathematical artifacts such as peaks and nulls not in the natural data.
It’s never easy.
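Point (B) is easy to demonstrate. A minimal Python/SciPy sketch (the 30-point length is just an example): the frequency response of a plain running mean has nulls, and its sinc-shaped side lobes go negative, so smoothing alone can manufacture both notches and phase inversions.

import numpy as np
from scipy import signal
b = np.ones(30) / 30               # 30-point running mean (boxcar FIR)
w, h = signal.freqz(b, worN=4096)  # its frequency response
# |h| nulls at multiples of fs/30; between nulls the response changes sign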
Bernie Hutchins July 31, 2014 at 4:23 am
“Here is what we can say for sure.”
Bernie,
Thank you again for your helpful thoughts.
I was considering some artifact from the haphazard data agglomeration called “Gorebull temperature”,
with that nonsense of a temperature further haphazardly massaged through some unidentified time-interval (aperture) filter to become the “Gorebull temperature anomaly”. This anomaly may be some indication of Earth’s internal energy, perhaps, or not, influenced by insolation and/or atmospheric CO2.
I remember the FT artifact between a step function and a single flat pulse being resolved by the FT of a constant from t1 to infinity, which was analytical.
Not seeing any direct question here, this may best serve as a platform for a few more general comments about how engineers see “climate science”.
(1) First, engineering is at least an order of magnitude MORE STRAIGHTFORWARD than so-called climate science. Wiggle words won’t get you very far in engineering.
(2) The engineer’s job is to make things actually work. Engineers have a mindset in this regard that starts with the expectation that things are unlikely to be right the first time. Often getting around “bugs” requires an admirable level of careful analysis and intellectual creativity.
(3) Engineers (particularly EEs and MEs) design for stability using negative feedback, and when we see a highly stable system (natural or engineered), we expect to find it contains negative feedback, or at the very least an absence of “tipping point” (as opposed to merely amplifying) positive feedback.
(4) Most of an engineer’s work is not from textbooks or handbooks. One false view of engineering is that engineers just dust off ancient books, find the “correct formula”, plug in numbers, and do arithmetic. To the contrary, engineers need to constantly develop new methods and new tools, paying attention to minute details while always keeping the big picture firmly in mind.
********************************************************
Here perhaps is the additional opportunity to point out that I have just in the last few days posted a new application note (AN-413) on a bunch of tools and results related to notch filtering. It is here:
http://electronotes.netfirms.com/AN413.pdf
Interesting and informative.
Trying to adapt your mechanical analogy in fig 8 to climate:
red ball ( forcing ) = radiation
displacement (LP) = OHC
velocity (BP) = temp
accel (HP) = dT/dt
Where is the notch ?
I don’t think that is correct, but you get the idea. Can you relate this to surface temp and radiation, and say what the notch would represent?
Greg – Ahhhh… Quite So. Thanks.
Perhaps what I have illustrated is the difficulty of constructing the mechanical analog of a notch, and perhaps by inference, the difficulty of finding a notch in nature. The notch is the sum of a HP and a LP, which correspond to summing the Acceleration and the Displacement of the mass. That is, we sum things that are two integrals (or derivatives going backward) apart. They would be in different physical units in fact, unless represented by state variables. I think that fortunately here, we have a spring whose force (and hence the acceleration of the mass) is in terms of an elongation (Hooke’s law). Thus we want to sum the displacement and the elongation of the spring (scaled properly and with properly interpreted signs). The displacement of the mass is easy enough, but try to “measure” the elongation of the spring and add it to the displacement. I think I did draw a Rube Goldberg scheme for this many years ago but could not envision it in operation then. Now I wonder if I still have the diagram. Hence my cryptic diagram.
But you get me supposing that summing a natural LP and a natural HP may be physically problematic. Summing, for example, meters and meters/sec²! Interesting. Any ideas?
But there is no climate interpretation intended in the mechanical analog.
Bernie Hutchins August 3, 2014 at 5:07 am ·
Not seeing any direct question here, this may best serve as a platform for a few more general comments about how engineers see “climate science”.
(1) First, engineering is at least an order of magnitude MORE STRAIGHTFORWARD than so-called climate science. Wiggle words won’t get you very far in engineering.
——————————————————————————-
Engineers like to solve problems: first identify “the problem” to be solved, and what tools and resources are required to solve it. So-called climate science seems to be about creating problems from pre-existing fantasy, then publishing the fantasy problems so derived. Any solution is left to the politicians.
Is David still having problems with: “Note that we need to find the step response of not just a notch filter, but a notch combined with a low pass filter and delay in a particular configuration, for which an analytic solution is unlikely–though I’ll have a go.”?
I cannot decide if David wants the “time response” to a step function convolved with all three “filters”, or the frequency response of that convolution, which is the product of all three FTs; all three are individually analytic, I think. Is not David doing a convolution of the sunspot discrete time series with three filters to reach a predictive “Gorebull temperature” discrete time series? He must synchronize the analytical filters with the two pre-defined discrete time series in some way that is reasonably efficient. Do you want a spectral or a temporal output? The spectral likely has more information, but some fool will wish to know “what will tomorrow be like?” As you say, never easy. Check:
http://ufdcimages.uflib.ufl.edu/AA/00/01/16/66/00001/Signals.pdf
There are other references. Google: “Fourier transform of constant from t1 to infinity”.
David in Fig. 2 of Part 6 has a parallel structure followed by a low-pass. The parallel part has a direct (“immediate”) path combined with a second path that has a notch (and we assume a delay due to the notch – less than he originally thought) and an additional (presumably flat frequency response) delay. [I do not understand “Delay Filter” as being anything other than an additional delay. I’m not sure if this delay will be in his revised model.] Once these are combined (added) the sum goes through a low-pass. All this is straight-forward, at least as a flow-graph.
If there were no notch, the sum of the immediate and delayed paths is a well-known comb filter, which itself has periodic notches. The notch in series with the delay complicates things. Yet if everything is specified, there is no problem calculating a transfer function (and associated magnitude and phase responses) to the output of the summer. Adding on the low-pass is then essentially trivial – multiplying the low-pass transfer function by the aforementioned parallel result. If we are given the various weights (multipliers) we know the yellow box as a single “animal”. [Any series connection is of course a multiply of transfer functions OR a convolution of time responses.]
Although there are three “filters”, they are NOT in series, but interconnected as described above. You add two and then put this sum in series with the low-pass. The FT is NOT the product of the three, NOR is the step response the convolution of a step with all three (sequentially). So it is not terribly simple, but it is straightforward if everything is given. I see no reason why the overall network can’t be analyzed in pieces or as a whole as the individual prefers.
You ask if we want an answer in terms of frequency or of time. Of course we want both! And by using the FT we can have both. Also some have suggested an impulse response instead of a step. I would suggest that a step is likely to be far more useful. Except as an integrator might be encountered (unlikely and not shown), anything to do with an impulse will disappear by and by. And how long is an impulse for the sun or climate system! Interesting perhaps, but don’t we really want to know what happens if there is, say, a 3% change in a solar parameter that hangs around for at least a few decades? Thus step response would seem the best choice.
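For anyone who wants to play with that flow-graph numerically, a minimal Python/NumPy sketch. The notch form, the unity path weights, the 11-year delay and the 5-year low-pass are placeholder assumptions, not the model’s fitted parameters:

import numpy as np
f = np.linspace(1e-3, 0.5, 2000)  # frequency, cycles per year
s = 2j * np.pi * f                # evaluate the transfer fn on the imaginary axis
w0, Q, tau, tau_lp = 2 * np.pi / 11, 5.0, 11.0, 5.0
notch = (s**2 + w0**2) / (s**2 + (w0 / Q) * s + w0**2)
lowpass = 1.0 / (1.0 + tau_lp * s)
H = (1.0 + notch * np.exp(-s * tau)) * lowpass  # (immediate + notch*delay), then LP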
Bernie Hutchins August 3, 2014 at 12:02 pm · Reply
David in Fig. 2 of Part 6 has a parallel structure followed by a low-pass. The parallel part has a direct (“immediate”) path combined with a second path that has a notch (and we assume a delay due to the notch – less than he originally thought) and an additional (presumably flat frequency response) delay. [I do not understand “Delay Filter” as being anything other than an additional delay. I’m not sure if this delay will be in his revised model.] Once these are combined (added) the sum goes through a low-pass. All this is straight-forward, at least as a flow-graph.
Thank you Bernie,
Does not that small direct path, plus the large delayed notch part, indicate a large energy source quite other than the tiny insolation energy, for the total internal energy of this Earth, which may possibly show up as the homogenized Gorebull surface temperature? Do those sunspots give any good indication of the magnitude and phase of that large energy source?
BH “But first of all, things HAVE to work. If they don’t work, what matter cost or feasibility? I always told my students “First make it work, then make it pretty.””
You must get away from educating and back to learning. My daughter just taught me that her job, “marketing analysis”, is the same as engineering. Her job is to decide if the plastic bottle of crappy shampoo will sell better at Wal-Mart with the purple or the green top. Judging by her age and salary, she makes better guesses than I do.
This planet is gifted with folk of many skills. We only need ask, rather than tell!
Item 4 needs some more elucidation. The beginning of solving the problem is applying simple math to the question. The real engineering begins with solving the interactions of second, third, fourth, etc. order effects with each other and the problem space.
Einstein put it correctly:
As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. – Albert Einstein
And on top of all that, you have to solve the problem in a way that makes economic sense. And the economics is a slippery thing to get hold of, because the relationships are not fixed, and sometimes not obvious. A one-megohm resistor may, in a given circuit, be a “better” small-value capacitor than an actual capacitor. Which is to say, real components are a mix of effects.
And then you throw in software. That gets very tricky, because a “bug” introduced here may evidence itself there, and much later. So cause and effect are related, but not in obvious ways. And then fixing that “bug” may introduce others.
And it all interacts in ways that are not mathematically tractable.
What I have seen is that companies balk at older engineers because they are not up on the “latest”. And that is a problem. And they cost too much. But the bigger problem is that newbies make mistakes that the OFs would never consider. Aerospace does a pretty good job in that regard, with its preference for seasoned engineers.
And did I mention the interaction with process? How does the production factory work? What test equipment will be required?
I don’t disagree.
I just wanted to remind folks that engineers invent their own tools on the fly. And certainly we expect the unexpected. When someone comes in saying they have a “quick question” or “simple question” you can be pretty sure they don’t.
But first of all, things HAVE to work. If they don’t work, what matter cost or feasibility? I always told my students “First make it work, then make it pretty.”
There may still be a few non-engineers reading here. It was for them.
Bernie: “Perhaps what I have illustrated is the difficulty of constructing the mechanical analog of a notch, and perhaps by inference, the difficulty of finding a notch in nature. The notch is the sum of a HP and a LP, which correspond to summing the Acceleration and the Displacement of the mass. That is, we sum things that are two integrals (or derivatives going backward) apart. They would be in different physical units”
That looks like a problem with the “state variable” approach. What about returning to LCR? Is there a possibility of an inductive type of response in climate? Maybe.
Regarding OHC or temperature as the “potential” or analogue of voltage and radiative flux as current:
An inductor is a negative feedback in potential acting to maintain current flow.
The tropics is where most of the heat input to the climate system occurs. So tropical climate is probably where to look.
Tropical storms are a negative feedback on SST. Locally non-linear but possibly linear or non-linear on a regional scale ( not a material difference here ).
A drop in radiative input (like a major stratospheric eruption) causes a rapid drop in SST, which causes a later onset of storms and/or a drop in the spatial density of storms, and that produces an increase in radiative flux into the tropics.
Unless I’m mistaken that is analogous to the inductor.
Capacitive storage is obvious and does not need further explanation.
Resistance: since most lossy mechanisms end up as heat, which is our potential we cannot use them as resistance. Also radiation reflected back to space is part of the feedback giving L , so that must be avoided too.
Heat lost to extra-tropical regions by oceanic mixing (major gyres) may provide R. Also possibly atmospheric circulation, though care is needed in using that as R, since it is probably laden with feedbacks.
Now the LCR idea is linked to the idea of an industrial PID (proportional-integral-differential) controller, which links into what someone suggested a year or two back in relation to tropical response to volcanoes.
I demonstrated by overlaying the last six major eruptions that tropical climate seems to maintain its degree.day product: the I in PID controller. Unfortunately I can’t recall who suggested the PID analogy but it struck me as a good one.
http://climategrog.wordpress.com/?attachment_id=285
Follow links, there are four graphs showing tropical/ex-tropical response for NH,SH of land and sea.
So, unless there’s a flaw in my logic, there may be some physical evidence of the presence of an ‘inductive’ response in tropical climate, which leaves the door open for an LCR notch filter.
That also brings us back to a serial network, convolution, products of freq responses, which makes the whole explanation less “convoluted” and more credible.
It would also avoid the need for a second, external driver with a convenient phase lag.
I still find a simple relaxation response a far more parsimonious explanation of the long-term correlation than the notch idea.
http://climategrog.wordpress.com/?attachment_id=981
Thanks for your comments Greg –
First let me emphasize that my concerns here are mostly engineering and a bit about mechanical analogs and whether or not a notch readily occurs in nature. I am not advocating or defending any particular modeling. I don’t have that big picture.
Secondly, I believe the state-variable approach is intended to AVOID the issue of physical units. The “variables defining the state” all just become numbers, as in the analog computer that is the basis of this approach.
Your point about going back to RLC is good. It is fairly easy to let ourselves think we see an inductor even where there is no “coil of wire” evident. Since for an inductor v = L(di/dt) = L(d²q/dt²), we have the simplest 2nd-order differential equation, just as Newton’s second law is F = M(d²x/dt²). In the case of the mass-spring-damper system we further had F = −kx (Hooke’s Law). So suppose we have an ion accelerated in an electric field established by a voltage v. The moving ion looks like a current controlled by an inductor.
So it seems possible that there could be a flow of “something” (like charge flowing as a current) through two media (like impedances) such that one produced a result (like voltage) proportional to the time derivative with this result added to a second result proportional to the integral. It seems kind of like letting mathematics push the real world around too much? I need to read Wigner and Hamming (unreasonable effectiveness of math) again ;).
An analogue computer is fine for doing calculations, just like a digital one. That lends no validity to the model that is being calculated. As you say, numbers without physical dimensions.
I think that if there is a notch effect in climate it has to be physical quantities that form a dimensionally valid equation. That would seem to imply LCR or a delay line.
Heat transport by surface ocean currents could provide a delay, but surface currents would be subject to all sorts of further modification. Water sinking in the Arctic and popping up 5.5 years later in anti-phase, in sufficient quantities to provide a global notch…. pushing the bounds of credibility a bit.
David uses SSN/TSI as the “input”, surface temp as output, and sees a notch. But he then goes outside of that and introduces a second input, out of phase with the first, to explain the notch. That is inconsistent.
I was hoping you could check my logic about tropical storms being ‘inductive’. Assuming the physical effects are as I described them, and regarding temperature as the “potential”, do you think I am correct in seeing that as an inductive element?
Greg said:
“I think that if there is a notch effect in climate it has to be physical quantities that form a dimensionally valid equation. That would seem to imply LCR or a delay line.”
Key here is of course your saying “if there is a notch.” As for the delay, would we agree that the recombination of an original signal and a delayed one needs to be (physically) an addition? (I can’t imagine a subtraction mechanism.) The resulting comb response would thus have to null out (or at least cause a dip) at frequencies 1/(2T), 3/(2T), 5/(2T), etc. So if a dip were found at a frequency of 1/11 per year, is there any evidence for a dip at 1/3.67 per year?
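A quick numeric check of those predicted dips (Python/NumPy; T = 5.5 years is the delay implied by a first null at an 11-year period):

import numpy as np
T = 5.5
f = np.array([1, 3, 5]) / (2 * T)               # predicted nulls at (2k+1)/(2T)
print(1 / f)                                    # periods: 11, ~3.67 and 2.2 years
print(np.abs(1 + np.exp(-2j * np.pi * f * T)))  # |1 + delayed copy| ~ 0 at each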
Greg also said:
“I was hoping you could check my logic about tropical storms being ‘inductive’. Assuming the physical effects are as I described them, and regarding temperature as the “potential”, do you think I am correct in seeing that as an inductive element?”
This (tropical storms) is not something I know much about. But temperature, being an “intensive” physical quantity (as opposed to “extensive”), would seem to be validly considered a “potential”, such as voltage. For an inductor, the voltage is v = L(di/dt) = L(d²q/dt²). So, do you see the same equation? Unfortunately, this is the same form as Newton’s Second Law, so it is likely to be lurking about in any “back-of-the-envelope” scribbling we make. So – do you see temperature as a function of derivatives of something flowing – analogous to voltage as a second derivative of charge?
Thanks Bernie, I’d missed the notification of your reply.
To put your question the other way around: is there something that is proportional to the integral of temperature?
∫ V dt = −k·I
Well, I have shown that the tropics appear to maintain the degree·day integral. That suggests there is a negative f/b on degree·days: the “I” in a PID controller.
http://climategrog.wordpress.com/?attachment_id=285
This appears to come from the timing and density of TS. I don’t think anyone has an equation for that, but the system behaviour appears to be inductive in nature.
A notch is not difficult in nature. All you need is a phase inversion. And for a phase inversion at a specific frequency a delay will do nicely.
That’s what I said just above.
“Heat transport by surface ocean currents could provide a delay, but surface currents would be subject to all sorts of further modification. Water sinking in the Arctic and popping up 5.5 years later in anti-phase, in sufficient quantities to provide a global notch…. pushing the bounds of credibility a bit.”
A delay, without further phase-changing effects or attenuation, would “do nicely”.
Now you just need to find one. Perhaps you have a suggestion 😉
I do not know the field well enough.
I know this one better: http://spacetimepro.blogspot.com/2014/04/lpc812-devl-20-march-2014.html
You are quite correct, and you have said this before, and I have acknowledged that you were correct to suggest it. It’s just classic “destructive interference”: full or partial cancellation, 180 degrees out. It should also show up at odd harmonics of the first dip. Easy to postulate – likely very hard to see. And any propagation path seems very unlikely to be a reasonably pure delay. Easy with CCDs or BBDs on the bench. (Walter Ku and I have a 1982 Aud. Eng. Soc. paper on it – posted on my Electronotes site.) Real world – not so much. (Specular reflections of sound off a flat surface such as a runway. Something like that, perhaps.)
A strong signal going to hide for 5.5y and popping back out? It’s getting rather fanciful.
Here I extracted a time constant of about 8 months from satellite energy budget data:
http://climategrog.wordpress.com/?attachment_id=884
That presumably gives the RC time constant.
In fact I think there are two separate processes here: a strong -ve surface f/b which strongly rejects all radiative forcing (incl. AGW), and a different process governing SW radiation that penetrates deeper and is not subject to the surface f/b.
The latter is subject to a much heavier low-pass filter, due to very large heat capacity, and is controlled by a simple relaxation-to-equilibrium response.
Here is a relaxation that gives about the right lag; it needs a low-pass to get rid of the 11y ripple.
The ripple in SST is lunar.
http://climategrog.wordpress.com/?attachment_id=981
Dr. Evans,
I have come across yet another paper that may help explain your mysterious force X, though the authors cannot pin down exactly how it all works.
According to the link:
“Although the mechanism is not understood, the authors find good correlation between their “irregularity index of ISSN [International Sunspot Number]” and the strength of quasi-biennial oscillations, which “dominates variability of the lower stratosphere” and which may in turn control the jet stream and winter polar vortex that led to this winter’s record US cold temperatures.”
http://hockeyschtick.blogspot.co.uk/2014/07/new-paper-finds-another-potential-solar.html
Interesting, but it seems like a major effort to reproduce, since it relies on several other papers and does not adequately describe the method used.
It would be more impressive if they were not distorting and inverting the data with a crappy filter before doing the analysis.
It seems odd that people who seem comfortable attacking some fairly complicated maths are not capable of finding a decent filter and are, apparently, unaware of the distortions caused by running averages.
They repeatedly say they are “averaging” the data, which is inaccurate, since an average results in fewer data points. It is clear from the graphs that they are not averaging the data but taking a running average, which is not the same thing.
It is known that the noise level is highest at solar minima, and it has been suggested by Ray Tomes that the data should be square-rooted before analysis, without giving a justification (econometrics mentality).
Taking the square root of SSN does give a more even noise level (high-frequency variability). This may have more to do with the nature of the observations, where detectability is determined by the angular resolution of the observing optics.
This translates to the smallest _linear_ dimension of sunspot features on the photosphere, which is probably not a function of the underlying physical process; that is more properly related to the area.
This will lead to larger errors when studying smaller features at solar minima.
Fig. B2 in the paper shows it is picking up stronger ‘irregularity’ at cycles with low solar minima.
It remains to be seen whether findings with arbitrarily selected parameters are not simply detecting the anomalous distortions produced by the running-mean filter.
If they repeat the exercise with proper filtering and get similar results, it may be worth a closer look.
Here is a rather interesting set of observational data showing how relaxation-to-equilibrium processes respond to variable insolation.
http://climategrog.wordpress.com/?attachment_id=1000
If we regard 08:15 as the 1960s peak of solar activity, we see the temperature response peaks about 08:45 (cf. year 2000).
I would say that in 2014 we are at about 09:00 on this graph. Temperatures will continue to cool, as they generally have since 2005, but there will not be any sudden drop.
For visual comparison, here is SSN with a light 5y relaxation:
http://climategrog.wordpress.com/?attachment_id=981
By strange coincidence, local cloud conditions from 06:00 to 09:00 seem to be fairly analogous to SSN variations since 1830 AD.
There is no “11y cycle”, but the interdecadal variability seems comparable. So if there is an 11y notch filter in climate, or some other filtering process, this graph gives an idea of how a real physical system can respond.