Oh, definitely. The linkages in the Sikorsky that control angle of attack are terribly complex. In a drone where the blades are ducted, you can effectively run the pitch mechanism on the outer edge of the blades, which should be a lot simpler, though the control strategy is still ugly.
Microturbines have much higher specific power than any comparable engine. Fuel cells also look great by that metric.
"So you've done this before?"
"Oh, hell no. But I think it's gonna work."
Re drones, a major factor will also be cost. Fixed props are cheap and super easy to make, since flight control is just software and a cheap chip. Variable pitch = complex, expensive, and much higher R&D time.
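To illustrate the "flight control is just software" point: on a fixed-prop quad, the whole mechanical problem collapses into mixing four motor commands from throttle/roll/pitch/yaw. A rough sketch (the X-frame layout and sign conventions here are my own assumptions; real firmware conventions vary):

```python
# Minimal motor-mixing sketch for a fixed-pitch X-frame quad. Assumed layout:
# front-right and rear-left spin CCW, front-left and rear-right spin CW.
# Sign conventions are illustrative, not from any particular flight stack.

def mix_quad_x(throttle, roll, pitch, yaw):
    """Map stick commands to four motor outputs, clamped to [0, 1]."""
    m_fr = throttle - roll + pitch + yaw  # front-right (CCW)
    m_rl = throttle + roll - pitch + yaw  # rear-left  (CCW)
    m_fl = throttle + roll + pitch - yaw  # front-left (CW)
    m_rr = throttle - roll - pitch - yaw  # rear-right (CW)
    return [min(1.0, max(0.0, m)) for m in (m_fr, m_rl, m_fl, m_rr)]
```

That's the entire "transmission" of a fixed-prop multirotor, which is why the hardware can stay so cheap: all the differential thrust logic lives in a few lines on the chip.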
https://www.instagram.com/vijil/
I draw guns and spaceships and bunnies
I don't think we're quite that far along yet. The best you could hope for is centralized programming on the control platform you're using, with a lot of the strategy dictated by the control modules you pull in, and I don't know of an established platform that already has many of these logic blocks built.

Also, when you build a control strategy out of modules and blocks, you pay higher "overhead" on your processor: because the objects have to handle as many generic situations as they can, the processor has to check a lot of unnecessary code that won't be optimized for your control system. Newer processors handle this problem better, but it's still the same tradeoff we've seen in industrial control between "how much can you do?" and "how many processors can you afford?" As soon as a faster processor comes along or RAM gets even smaller, the drive to add more module functionality increases.
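The generic-block overhead tradeoff can be sketched concretely. A library PID block that supports optional features has to branch on all of them every scan, even in loops that use none; a hand-written loop for one specific drive doesn't. (This is an illustrative toy, not any vendor's actual function block.)

```python
# Toy illustration of generic-module overhead vs. specialized code.
# A "do everything" PID block pays for feature checks on every scan.

class GenericPID:
    def __init__(self, kp, ki=0.0, kd=0.0, out_min=None, out_max=None,
                 anti_windup=False, derivative_on_measurement=False):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.anti_windup = anti_windup
        self.d_on_meas = derivative_on_measurement
        self.integral = 0.0
        self.prev = 0.0

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # Every optional feature is a branch the processor checks each scan,
        # whether or not this particular loop uses it.
        if self.d_on_meas:
            deriv = -(measurement - self.prev) / dt
            self.prev = measurement
        else:
            deriv = (error - self.prev) / dt
            self.prev = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        if self.out_max is not None and out > self.out_max:
            if self.anti_windup:
                self.integral -= error * dt  # crude windup clamp
            out = self.out_max
        if self.out_min is not None and out < self.out_min:
            out = self.out_min
        return out

# A loop specialized for one application needs only the math it actually uses:
def p_only(kp, setpoint, measurement):
    return kp * (setpoint - measurement)
```

Both produce the same output for a plain proportional loop; the difference is all the conditional checking the generic block drags along on every scan cycle.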
We definitely aren't in a place yet where the typical control program application can begin "learning" how to improve control in the same way that an IBM program learns to master checkers or Jeopardy. I don't doubt it will be here in my lifetime, though.
That's just not very accurate; reinforcement learning as a control approach in robotics is not new, and it doesn't require much overhead once the training phase is complete. The biggest issue, as with most machine learning models, is generating a data set and training the model.
In the drone example given, one somewhat accelerated approach would be to start with a few "pure" vector settings and let the reinforcement learning handle the complex interpolation.
I think you'd be surprised where that field is.
Interesting. I'll look into it more. What I see in machine control for industrial applications is almost never what I'd typically consider "robotics", and there is nothing like reinforcement learning going on in any control platform on the market. Certainly not ours, and nothing to worry about from the competition.
my body is ready:
https://www.youtube.com/watch?v=UIJSU8wA67I
Still will be focused on GTLM, but curious to see how the American DPis fare against the Euro-spec LMP2s.
Last edited by cockerpunk; 01-24-2017 at 02:25 PM.
social conservatism: the mortal fear that someone, somewhere, might be having fun.