Has anyone used thermal imaging and a thermal modeling package to do real time transient compensation?
this is a fundamental issue with machine design, and the source of all precision engineering work: how do you make accuracy from something less accurate?
the simple answer is reversal of measurement. you can make a machine tool more accurate if you measure and compensate it properly. you cannot make a machine more accurate than your sensor noise floor, nor more accurate than the dynamics noise floor (temperature, vibration, etc.). but you can make a machine tool more accurate simply by measuring it intelligently.
this is, in essence, free accuracy. AKA, the best kind. the kind your boss likes.
here is how it works:
imagine we wanted to make a perfect right angle. we go to our carpentry store and buy ourselves a square. after some use of this square, and double checking our "squares" using their diagonals, we note that our squares are not square.
what do you do?
we reverse the square.
we take a sheet of paper, and draw a line for the base and a line up the square. now, we flip the square over, line up the base with the line we drew for the base, and draw another line up the square. these lines will not be parallel, and their divergence will be DOUBLE the error in the square. now, if we are clever, we can warp the square by HALF the measured error, and our square will now be square.
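the reversal arithmetic works out like this as a numeric sketch (the 0.3 degree error is an illustrative value, not from any real square):

```python
# reversal of the square: the divergence between the two traced lines is
# TWICE the square's error, so the fix is HALF of what you measure.
true_error = 0.3                       # unknown to us: the square is really 90.3 deg

first_trace  = 90.0 + true_error       # trace with the square face up
second_trace = 90.0 - true_error       # flip the square: the error reverses sign
measured_divergence = first_trace - second_trace

correction = measured_divergence / 2   # warp by HALF the measured divergence
calibrated = (90.0 + true_error) - correction

print(round(measured_divergence, 6))   # 0.6 -> double the real 0.3 deg error
print(round(calibrated, 6))            # 90.0
```

note the calibration never needed a reference square: the square measured itself.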
the same thing can be done with a machine tool.
if we want to make our linear axis straighter, say Z on a lathe, most folks would go buy a straightness artifact, use it to map their axis, subtract out the error in the control system (or mechanically modify the axis), and call it straight. in reality, all they have done is give their axis all the errors their artifact had in it. this is like buying a second square made on the same machine as the first. it's dumb.
no, instead we can use the machine itself, to straighten itself intrinsically.
toss a part in C, cut it as straight as you can with Z, and then measure it. you could measure the diameter, and note that the diameter does not stay constant. if your spindle is good, and your chucking is good (as we learned in the last lesson), then this is Z axis error. measuring the diameter and then compensating Z at each position through its travel will straighten Z out. we must remember that we are measuring DOUBLE the error when making this compensation.
you can repeat this process several times until your diameter matches all along the cylinder.
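the cut-measure-compensate loop can be sketched like this (the function name, error values, and pass count are hypothetical; the point is the half-of-diameter factor):

```python
# lathe Z straightening sketch: the mic'd diameter error at each Z station
# is TWICE the axis straightness error, so each pass compensates by half.

def straighten(straightness_error, passes=3):
    """straightness_error: radial axis error at each Z station (mm).
    returns the residual radial error after iterative compensation."""
    comp = [0.0] * len(straightness_error)
    for _ in range(passes):
        # cut with current compensation, then mic the part:
        radius_error = [e + c for e, c in zip(straightness_error, comp)]
        diameter_error = [2 * r for r in radius_error]      # what the mic reads
        # fold HALF the measured diameter error back into the compensation
        comp = [c - d / 2 for c, d in zip(comp, diameter_error)]
    return [e + c for e, c in zip(straightness_error, comp)]

residual = straighten([0.0, 0.004, 0.010, 0.006])
print(max(abs(r) for r in residual))   # 0.0 -- the axis now cuts straight
```

in this noiseless toy the loop converges in one pass; on a real machine, measurement noise and transients are why you repeat it until the diameter reads constant.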
a rookie will put his indicator on his freshly cut surface, run it back and forth, and say, hey, my axis is perfect! but he has merely proven his axis is repeatable, not that it is straight. he has measured the same surface he cut, with the axis he cut it with ... this should always read 0 because the machine repeats.
but if he puts his indicator on the Z axis on the BACKSIDE of the part, this will again give him DOUBLE the straightness error of his machine. this is the preferred technique, as it's easier than measuring diameters and only requires one indicator.
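a toy calculation shows why the front reads zero and the backside reads double (function name, radius, and error values are made up for illustration):

```python
def indicator_readings(axis_error, radius=10.0):
    """axis_error: carriage wander in X at each Z station (mm, hypothetical).
    the front face was cut by the same wandering axis, so indicating it with
    that axis cancels the error; on the backside the part is mirrored through
    the spindle axis, so the wander ADDS to the surface error and doubles."""
    front = [(radius + e) - (radius + e) for e in axis_error]          # always 0.0
    back  = [round((radius + e) - (radius - e), 6) for e in axis_error]  # 2 * e
    return front, back

front, back = indicator_readings([0.0, 0.002, 0.005])
print(front)   # [0.0, 0.0, 0.0] -- the rookie's "perfect" axis
print(back)    # [0.0, 0.004, 0.01] -- double the true straightness error
```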
using this reversal technique you can make anything better, with almost anything in the chuck.
remember, as always, your transients, and the fact that you can only make a machine as good as its transient noise floor and its sensors' resolution will allow.
now, we talked about spindle error previously. yes, you can use reversal to do spindle error analysis as well. however, this is almost never the preferred technique, because on any scale that matters, your transients (temp etc.) will play a large factor in your calculations, whereas the multi-sensor approach is nearly immune to transients (assuming spindle performance demand is constant).
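for reference, the reversal version of spindle error separation (commonly called Donaldson reversal) is just a sum and a difference of two runs; the profiles below are synthetic stand-ins, not real spindle data:

```python
import math

# Donaldson reversal sketch: reversing part and probe flips the sign of the
# spindle error S(theta) but not the part roundness P(theta), so two runs
# separate the two by simple averaging.
N = 360
P = [1.0 * math.cos(3 * math.radians(t)) for t in range(N)]   # 3-lobe part, um
S = [0.5 * math.sin(2 * math.radians(t)) for t in range(N)]   # spindle error, um

run_1 = [p + s for p, s in zip(P, S)]          # normal setup: errors add
run_2 = [p - s for p, s in zip(P, S)]          # part and probe reversed

P_est = [(a + b) / 2 for a, b in zip(run_1, run_2)]   # part roundness alone
S_est = [(a - b) / 2 for a, b in zip(run_1, run_2)]   # spindle error alone

print(max(abs(p - q) for p, q in zip(P, P_est)))   # ~0: clean separation
```

the catch, as noted above, is that the machine must not drift between run 1 and run 2, which is exactly where transients bite.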
social conservatism: the mortal fear that someone, somewhere, might be having fun.
Long term gradient formations? Temperature gradients?
bumping this thread because we recently found ourselves in a situation like steve was talking about. we are attempting to build a piece of equipment with performance well under the noise floor of our machine.
the main problem with modeling and compensating is that the model becomes far too complex. any assumption made (even meshing is an assumption) creates errors, and when you're working at this level all those errors matter; otherwise your computation time skyrockets. not to mention it's tough to fully understand your initial conditions, because nothing is truly initial. how long ago did someone transfer heat to the machine by touching it? what state was that motor in (because motors generate heat) 10 minutes before you started?
so rather than model and attempt to predict what will happen, what we instead did was measure what happened. we do some kind of "pre-cut" to knock out any initialization errors, perform the operation with a reference-frame metrology setup, measure what happens, and then offline we change our command, knowing what happened, in order to generate the shape we want.
now, this works pretty well for things like thermal; however, it still will not remove error introduced by servo error and sensor drift. so there is still a noise floor, but you can cheat the machine's thermal issues if you have enough time and a good enough sensor and metrology reference frame.
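the measure-then-correct step amounts to folding the observed error back into the command, which can be sketched as below (function name, profile values, and units are illustrative, not from the actual setup):

```python
def corrected_command(command, measured, target):
    """shift each commanded point by the error the previous run produced."""
    return [round(c + (t - m), 3) for c, m, t in zip(command, measured, target)]

target   = [10.000, 10.000, 10.000]    # desired straight 10 mm profile
command  = list(target)                # first pass commands the nominal shape
measured = [10.003, 10.005, 10.002]    # what the reference metrology saw

command = corrected_command(command, measured, target)
print(command)   # [9.997, 9.995, 9.998] -- pre-distorted against the error
```

no thermal model appears anywhere: the machine's actual behavior, whatever caused it, is simply subtracted out on the next run.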
Would it be possible to use machine inputs and outputs in a black box (neural) model for error compensation?
They use a similar technique to avoid continuous iteration with high compute times:
https://www.google.com/url?q=https:/...sfVM4HbGxYBahw
You've got the input data (sensors) and the output metric (accuracy, or error). Seems to me like you won't ever get a predictive model that's better than the best learned compensatory model.
If the machine time is too expensive to run up a scouting data set, or the input space is too high-dimensional, then of course disregard the above.
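A minimal version of the "learned compensatory model" idea, assuming synthetic logged data and a single sensor input, doesn't even need a neural net; plain gradient descent on logged input/output pairs already captures the black-box flavor:

```python
# learn weight w so that error ~= w * temp_rise, purely from logged machine
# data, with no physical model. data pairs (deg C rise, mm error) are made up.
data = [(0.0, 0.000), (1.0, 0.011), (2.0, 0.021), (3.0, 0.032)]

w = 0.0
lr = 0.05
for _ in range(2000):                  # plain gradient descent on mean squared error
    grad = sum(2 * (w * dT - err) * dT for dT, err in data) / len(data)
    w -= lr * grad

print(round(w, 4))                     # 0.0106 mm per deg C, learned from data
compensation = -w * 2.5                # cancel the predicted error at +2.5 deg C
```

the scouting data set concern above is exactly the cost of filling `data` with enough (input, error) pairs to fit anything trustworthy.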
a neural model is orders of magnitude more computationally intensive than a first-principles model.
I suspect that's true for a huge number of applications.
Neural models are still kludgy at best, and even when they get better they won't always be appropriate. If you're working on a system with linear relationships (like thermal growth), and just a few scalar ins and outs, there's no reason to introduce a neural network - your computer is doing nothing more than addition and multiplication, something a binary calculator is exceedingly good at. A huge portion of science and engineering fits into this category of simple, low-dimension linear relationships.
It'd be like using a quantum computer to run MS Word. You don't need a neural net to run a thermostat; in fact, it's probably a bad idea.
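The thermostat point in code: a linear thermal compensation is literally one multiply and one add per axis. The coefficient and reference temperature below are hypothetical placeholder values:

```python
MM_PER_DEG = 0.011    # hypothetical measured growth coefficient for this axis
T_REF = 20.0          # temperature the machine was error-mapped at

def z_offset(temp_c):
    """linear compensation: scale and shift, nothing a calculator can't do."""
    return -MM_PER_DEG * (temp_c - T_REF)

print(round(z_offset(23.5), 4))   # -0.0385 mm of commanded pullback
```

two logged calibration points are enough to fit this; a neural net brings nothing but overhead here.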
Last edited by PBSteve; 01-27-2017 at 12:39 PM.
Comfort the afflicted and afflict the comfortable.
I work for the company building the Paragon...once we figure out a name