I have made a patch to eliminate the jitter of 3D objects near the viewpoint
(for example 3D cockpit objects).
The problem is the roundoff accuracy of the float values used in the
scenegraph together with the transforms of the eyepoint relative to the
scenery center.
The solution is to move the scenery center near the view point.
That way the relative accuracy of floats is sufficient to show a stable picture.
To get that right I have introduced a transform node for the scenegraph which
is responsible for that shift and uses double values as long as possible.
The scenery subsystem now keeps a list of all the transforms required to place
objects in the world and notifies each of them of the new scenery center
whenever the scenery subsystem's set_scenery_center() is called.
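Roughly, the idea works like the following sketch (class and member names here
are purely illustrative, not the actual FlightGear code): the transform keeps
the global position and the scenery center in doubles, and only the small
difference between them is converted to float for the scenegraph.

  // Illustrative sketch only -- not the actual transform node.
  class SceneryShiftTransform {
  public:
      // called by the scenery subsystem from set_scenery_center()
      void setSceneryCenter(const double c[3]) {
          for (int i = 0; i < 3; ++i) _center[i] = c[i];
      }
      void setGlobalPosition(const double p[3]) {
          for (int i = 0; i < 3; ++i) _position[i] = p[i];
      }
      // The difference stays small once the center is near the viewpoint,
      // so the final conversion to float loses almost no precision.
      void getTranslation(float out[3]) const {
          for (int i = 0; i < 3; ++i)
              out[i] = float(_position[i] - _center[i]);
      }
  private:
      double _center[3]   = { 0.0, 0.0, 0.0 };
      double _position[3] = { 0.0, 0.0, 0.0 };
  };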
The problem was not solvable by SGModelPlacement and SGLocation, since not all
objects, especially the scenery, are placed using these classes.
The first approach was to have the scenery center exactly at the eyepoint.
This works well for the cockpit.
But then the ground jitters a bit below the aircraft. With our default views
you can't see that, but the F-18 has a camera view below the left engine
intake with the nose gear and the ground in its field of view, and there I
could see it.
Having the scenery center constant still leaves these roundoff problems, but,
as is the case now, the roundoff error is then exactly the same in every
frame, so you will not notice any jitter.
The real solution is to keep the scenery center constant as long as it is
within a ball of 30 m radius around the view point. If the scenery center
falls outside this ball, it is simply moved to the view point.
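A sketch of that rule (purely illustrative; only the 30 m radius is taken from
the description above):

  // Only move the scenery center when the eyepoint leaves a 30 m ball around
  // it, so the roundoff error stays identical from frame to frame.
  static const double recenter_radius_m = 30.0;

  void maybeRecenter(const double eye[3], double center[3]) {
      double dx = eye[0] - center[0];
      double dy = eye[1] - center[1];
      double dz = eye[2] - center[2];
      if (dx*dx + dy*dy + dz*dz > recenter_radius_m * recenter_radius_m) {
          center[0] = eye[0];
          center[1] = eye[1];
          center[2] = eye[2];
          // ... then notify all shift transforms via set_scenery_center()
      }
  }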
As a side effect of now being able to switch the scenery center in the whole
scenegraph with one function call, I was able to remove one half of a problem
when switching views, where the scenery center was far off for one or two
frames after switching from one view to the next. Also included is a fix for
the other half of this problem, where the view position was not yet copied
into a view when it was switched to (at least under GLUT). This was
responsible for the 'Error: ...' messages from the cloud subsystem when views
were switched.
was probably my idea of a feature, but if the input buffer actually
does have a length of zero (as Melchior discovered for the case of a
zero-length .nas file) then there will be no null.
Changes
=======
- corrected some strange behavior when playing with the render dialog options
- the density slider is now working: if you are FPS-limited and still want to
  see clouds in the distance, you should play with it
- added a choice of texture resolution; it's more comprehensible now (before it
  was wrongly always choosing 64x64 textures)
- changed the initial texture size: you now have 64 textures of 64x64, which
  uses 1 MB of texture memory (before it was 20 textures of 256x256, which took
  more memory, and there were not enough impostors)
- the sun vector is now correct, so the lighting is a bit better
- removed useless sorting and lighting computations for impostors; this should
  save a lot of CPU
- blending of distant clouds is more accurate now
- clouds are now positioned correctly, they don't try to escape you anymore
- no more red/white boxes around clouds
- textures are now filtered (no more big pixels)
known bugs
==========
- distant objects are seen in front of clouds
- new and updated sources for the new volumetric clouds
- 2 new textures for the clouds
- an update to the render dialog to enable/disable and change a few parameters
for the new clouds
I have some small updates to the ground cache.
- Remove the usage of dynamic_cast where it is known that the result will be
  non-null.
- Renormalize the surface normal in double precision (see the sketch after
  this list).
- Place the ground cache's center at the point where it was requested, not at
  the scenery center. This fixes the problems with JSBSim's trimming together
  with the ground cache. Now I am ready to commit JSBSim's ground cache usage.
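The renormalization item boils down to doing the length computation in double
even though the stored normal is float; a minimal illustration (not the actual
ground cache code):

  #include <cmath>

  // Illustrative: renormalize a float surface normal in double precision.
  void renormalize(float n[3]) {
      double x = n[0], y = n[1], z = n[2];
      double len = std::sqrt(x*x + y*y + z*z);
      if (len > 0.0) {
          n[0] = float(x / len);
          n[1] = float(y / len);
          n[2] = float(z / len);
      }
  }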
I'm looking through the AI code, trying to find the bug that's killing the
thermals. The following things don't look right:
1) AIManager::101, the Traffic Manager pointer is searched for by name at
every dt. I'll leave this for you to look at.
2) AIManager::295, the thermal height is not being set. We need to
restore the line: ai_thermal->setHeight(entity->height_msl);
This fixes the thermal problem.
3) AIManager::328, I changed the fetching of the user state to occur every
sim cycle, and changed the fetching function from a by-name lookup to a lookup
by node pointer. It should be faster now, and more accurate too. This helps
the air refueling.
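The lookup-by-node-pointer change amounts to resolving the property node once
and then reading through the cached pointer every cycle, roughly like this
(the path is only an example, not necessarily the one AIManager uses):

  // Resolve once, e.g. at init time:
  SGPropertyNode *user_lat_node = fgGetNode("/position/latitude-deg", true);

  // Every sim cycle: read through the cached pointer instead of doing a
  // string lookup such as fgGetDouble("/position/latitude-deg") each frame.
  double user_lat = user_lat_node->getDoubleValue();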
There haven't been changes to this script in a while -- it almost looks
like dead code, but isn't. I'm using it regularly. valgrind works
better than ever (version 3.0 is coming out soon, and the alpha is already
very usable). New address: http://www.valgrind.org/
have a "property" mode as well as the original "binary" mode. The property mode
will allow the remote module to request any set of properties, and it will send
those properties each frame. The remote module can reply with a list of arbitrary
property name/value pairs to update on the FlightGear side.
This is a first stab, so it's not the cleanest, most well-conceived code, but it
allows an external module (communicating via a pipe) to have a huge amount of
flexibility in the data it can access and update.
Make the SDL window resizable. This exposes the same problem that many
GLUT users have: resizing up may cause a temporary switch to software
rendering if the card is low on memory; resizing down again switches
back to hardware rendering. (KSFO is texture intensive, but there are no
problems at LOWL and elsewhere.) Fewer and fewer users will have the
problem as cards get better, and it's no reason to disallow
resizing altogether.
_course_deg is first initialized in the if() branch (gps.cxx:419). But
this branch isn't entered on the first run if wp0==wp1, so in line 615
fgfs tries to SG_NORMALIZE_RANGE() a random value, which can take a
long while if that number is huge. This was occasionally a number greater
than 10^160!
- initialize all vars before they are used (fixes endless loop)
- fix some compiler warnings (initialization order, unused vars)
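For reference, this kind of range normalization is typically done by repeated
subtraction, roughly like the sketch below (not the actual SimGear macro),
which is why a garbage value of astronomical size keeps the loop spinning for
ages:

  // Illustrative sketch of degree-range normalization by repeated subtraction.
  double normalize_range(double val, double min, double max) {
      double span = max - min;         // e.g. 360 for a course in degrees
      while (val >= max) val -= span;  // an uninitialized value like 1e160
      while (val <  min) val += span;  // makes these loops run practically forever
      return val;
  }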
FGAIMgr::GenerateSimpleAirportTraffic() tries to determine the airport's local
hour from the /sim/time/gmt-time string, which fails because at this time the
property is still empty. That's why I don't get ATIS at LOWG (where it is *not*
midnight right now :-). -> use sg's get_cur_time() instead
- don't treat *every* child in the XML as a submodel, especially not a "param"
  block
- don't just *enable* the contrail flag above some altitude, but also
  disable it below
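The contrail part is just a matter of deriving the flag from the altitude test
in both directions, along these lines (names are illustrative):

  // Illustrative: compute the contrail flag from the comparison instead of
  // only switching it on above the threshold (which would leave it stuck on).
  bool contrailActive(double altitude_ft, double contrail_altitude_ft) {
      return altitude_ft > contrail_altitude_ft;
  }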
showDialog() is careful not to create a new FGDialog() if a dialog with the
same name is already open (active). But at this point it is already too late:
newDialog(), which was called shortly before, has already overwritten the
dialog properties. This leads to animated garbage in the best case, and a
segfault in format_callback() in the worst case.
- GUI::newDialog(): Don't you overwrite properties of an active dialog!
- GUI::readDir(): You may do that, but delete the old dialog first!
(necessary for reloading the GUI)
- FGDialog::makeObject(): only set format_callback() via setRenderCallback()
  if the property is "live"; otherwise, call it only once at construction
  time. This isn't just a performance improvement: without it the label kept
  growing until it hit the limit (256).
The previous message wasn't totally correct. Strings are now allowed, too. And
the pattern is now '[ -+#]?\d*(\.\d*)?l?[fs]' and *may* be embedded in a string.
There may only be one %s or %f, though. %% is allowed in the preamble/postamble.
(Yes, %ls is allowed, too, and treated as %s.)
Also, "end" is superfluous now.
Printing floats in dialogs with 8 digits after the decimal point is
inappropriate in most cases.
- implement a "format" property for "text" gui elements (a.k.a. pui label).
Number formats are set by strtod/snprintf, while formats on non-numbers
are replaced by "%s". Practical example in the upcoming material.nas update.
Valid formats regex: '%[ -]?\d*(\.\d*)?l?f' (IOW: the format must begin
with '%' and end with 'f').
# Nasal:
number = dialog.addChild("text");
number.set("label", "3.1415926");
number.set("format", "%.3f");
The dialog handling was written at a time when only one dialog was
shown at a time, and dialogs were shallow -- with only children, but
no grandchildren. This makes finding a draggable spot on modern dialogs
with nested objects quite a challenge. The patch fixes this, and other things:
- check the full object tree on button press, not only the outermost layer;
  and don't give up just because we are in *something* (which could well be
  something harmless, like a group); only ignore a few sensible objects
  (we don't want to drag after a click on a button or into an input field)
- don't lose dialogs as easily when dragging too fast (it does still happen
if one manages to enter an editable field while dragging, but this is
a plib problem and I don't feel like fixing that now :-)
- don't "live"-update input fields while they are in edit mode
- remove some "if (foo) delete foo;" redundancy
popping up and crashing when the B-29 model is in use. This isn't the
right solution; we should find the NaN condition. But it's harmless
and allows development with the B-29 to continue.
back, but forgot to put the same fix into the runtime code. Also
added some comments so I don't get confused again the next time I come
through here. :)
I have done some cleanup, moving some values out of classes where they do not
belong, and suchlike.
Also, the FLOLS offsets are now named in the carrier XML file with a more
verbose name (flols-pos/offset-*) than before (only offset-*).
There is also a little preparation for definitions of parking positions on the
carrier, which should later be used for starting FlightGear directly on the
carrier.
a safe undersleep() to conserve CPU. Essentially we undersleep our target by
just a bit (to avoid the chance of oversleeping), then finish off the
remaining time slice with a busy-wait loop.
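A rough sketch of the scheme (modern C++ used purely for illustration; the
2 ms margin is an arbitrary example value):

  #include <chrono>
  #include <thread>

  // Illustrative: sleep for a bit less than the remaining frame time, then
  // busy-wait the last slice so we never oversleep the deadline.
  void throttle_to(std::chrono::steady_clock::time_point deadline) {
      using namespace std::chrono;
      const auto margin = milliseconds(2);       // deliberate undersleep
      auto now = steady_clock::now();
      if (deadline - now > margin)
          std::this_thread::sleep_for(deadline - now - margin);
      while (steady_clock::now() < deadline)
          ;                                      // busy-wait loop
  }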
Norman Vine wrote:
> Frederic Bouvier writes:
>
>> Quoting Andy Ross:
>>> * Hopefully in a CPU-friendly way. I know that older versions of
>>> the NVidia drivers did this by spinning in a polling loop
>>> inside the driver. I'm not sure if this has been fixed or not.
>>
>> From my experience, the latest non-beta Windows NVidia driver seems to eat
>> CPU even with sync to vblank enabled. The CPU usage is always 100%.
>
> Buried in the PPE sources is a 'hackish' but portable way to limit CPU usage if the desired framerate is met
>
> /*
> Frame Rate Limiter.
>
> This prevents any one 3D window from updating faster than
> about 60Hz. This saves a ton of CPU time on fast machines.
>
> ! I THINK I MUNGED THE VALUE FOR ulMilliSecondSleep() NHV !
> */
>
> static ulClock *ck = NULL ;
>
> if ( frame_rate_limiter )
> {
> if ( ck == NULL )
> {
> ck = new ulClock ;
> ck -> update () ;
> }
>
> int t_ms = (int) ( ck->getDeltaTime() * 1000.0 ) ; /* Convert to ms */
>
> if ( t_ms < 16 )
> ulMilliSecondSleep ( 16 - t_ms ) ;
> }
>
>
I implemented the method pointed out by Norman. It works great on Windows and
saves me a lot of CPU cycles. This way, I can get the same framerate in
moderately populated areas and have the CPU idle 50% of the time instead of
wildly looping in the NVidia driver while waiting to sync on vblank.
It has been tested on Linux by Melchior. He saw the same gain in CPU cycles.