CN1414496A - Universal virtual environment roaming engine computer system - Google Patents

Universal virtual environment roaming engine computer system

Info

Publication number
CN1414496A
CN1414496A, CN02130736A (application CN 02130736)
Authority
CN
China
Prior art keywords
camera
scene
roaming
viewpoint
virtual environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 02130736
Other languages
Chinese (zh)
Other versions
CN100428218C (en)
Inventor
郝爱民
沈旭昆
梁晓辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Beijing University of Aeronautics and Astronautics
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CNB021307369A
Publication of CN1414496A
Application granted
Publication of CN100428218C
Anticipated expiration
Expired - Fee Related (current status)

Abstract

The invention relates to the field of computer virtual reality technology. The implementation approach includes: creating general virtual reality application resources, loading scene database files that conform to the scene description standard, a mechanism for customizing roaming states, setting up the input mapping and interpretation mechanism, terrain matching, and support for using two-dimensional input devices to select and manipulate objects in the three-dimensional scene. The invention observes the scene view with a dual-camera model and supports collision detection, roaming control, and several strategies for reducing scene complexity. Its advantages are complete functionality and clear interfaces.

Description

A universal virtual environment roaming engine computer system
Technical field
The invention belongs to the field of computer virtual reality technology, and specifically concerns a general-purpose roaming-engine computer system for driving virtual environments.
Background technology
Because of the important role of scene walkthrough in applications such as city planning and virtual construction, and because of the huge social and economic benefits of virtual reality technology and the fundamental changes it brings to related industries, some western developed countries began investing substantial funds and personnel in virtual reality research from the mid-1980s. Institutions such as the U.S. Advanced Technology Center, the Ya Telan design corporation, the University of North Carolina, and the Pai Ladimu company have obtained many achievements in theory and practice, and have begun to release special-purpose commercial software packages that are preliminarily practical but comparatively expensive.
At present, the frontier of three-dimensional scene roaming has formed two different research directions: geometry-based scene rendering, which represents and renders measured three-dimensional data, and virtual scene representation based on source image sequences. Geometry-based virtual reality developed early; with the rapid development of computer technology, numerous companies and research institutions have released a variety of solutions, whether in graphics acceleration hardware or in three-dimensional modelling tools and three-dimensional graphics environments on the software side. For example, the price/performance ratio of the graphics acceleration cards of U.S. companies such as SGI keeps improving significantly; the MultiGen company's modelling software MultiGen II Pro and its OpenFlight database format have almost become industry standards in the simulation community; and graphics systems such as Vega and Performer have developed into powerful, easy-to-use commercial software. Virtual reality technology is still young, however, and geometry-based virtual reality technology still has many problems to be solved. By comparison, image-based virtual reality technology started later; its main research direction is to reduce the complexity and labour intensity of scene modelling in geometry-based virtual reality and to relax the harsh hardware requirements of virtual reality systems. However, image-based virtual reality still needs much further theoretical work in areas such as handling virtual entities with physical characteristics, human-computer interaction with force feedback, and collision detection and response.
The scene walkthrough laboratories at the University of North Carolina and the University of California, Berkeley were among the earliest research institutions in the world to engage in scene walkthrough research and achieve outstanding results; they began research on real-time roaming strategies for complex models in 1990. In 1996, on a Power Series 320 workstation from SGI (a U.S. graphics workstation manufacturer), they achieved real-time roaming through Soda Hall, the new computer science building at Berkeley. The Soda Hall model consists of 1,418,807 polygons, occupies 21.5 MB of disk space, and uses 406 materials and 58 different textures. Because the research group adopted efficient data storage structures, multi-level-of-detail techniques, scene scheduling algorithms, real-time visible-region decision algorithms, and other techniques, the real-time simulation rate for Soda Hall was kept roughly constant at about 20 frames per second. Over years of research, the University of North Carolina scene walkthrough laboratory also proposed and continuously refined the UNIGRAFIX scene database and developed a number of software tool kits, such as a converter from the AutoCAD DXF data format to the UNIGRAFIX format, an automatic generator of multi-level-of-detail object models, and an automatic generator of simple scenes. For domestic developers, however, reusing a complete roaming engine at the code level is practically impossible.
There are also some domestic research institutions engaged in scene walkthrough research; representative among them is the Forbidden City roaming system implemented by the industrial psychology research department of Hangzhou University. The Forbidden City roaming system uses a bicycle as the interaction tool, allowing the rambler to ride through the virtual Forbidden City as if actually on site. What the system mainly presents is the outdoor appearance of the buildings; its indoor portions are greatly simplified and modelled with texture-based techniques, functions such as collision detection and collision response are relatively weak, and the system provides no functions for manipulating virtual entities.
Summary of the invention
To overcome the above shortcomings, the object of the present invention is to provide a universal virtual environment roaming engine computer system that implements the basic but important common functions of virtual environment roaming and encapsulates a set of key techniques, thereby reducing the difficulty for ordinary users of developing virtual reality application systems and making system functions reusable at the code level, which both aids standardized development and reduces development time and cost.
To achieve the above object, the universal virtual environment roaming engine computer system of the present invention is composed of a personal computer, a graphics acceleration card, a walker, stereoscopic display and tracking equipment, a scene database, and the roaming engine core components. Its implementation comprises the steps of: creating general virtual reality application resources; loading scene database files that conform to the scene description standard; customizing the roaming state mechanism; setting up the input mapping and interpretation mechanism and accepting one external input in each frame of the simulation loop, so that multiple input devices can be connected to the roaming system; performing viewpoint control, entity manipulation, and state setting according to external input commands; applying several scene-complexity reduction strategies during viewpoint control; performing collision detection and terrain matching; supporting the use of standard two-dimensional input devices to select and manipulate objects in the three-dimensional scene; and setting multiple roaming control states and environment effects to satisfy the requirements of different users.
The viewpoint control is roaming viewpoint control using a dual-camera model: an observing camera and a walking camera are created, the observing camera is constrained by the walking camera, the walking camera serves to realize multi-point collision during roaming and to maintain the direction of travel, and the observing camera has three rotational degrees of freedom. Viewpoint control further includes the steps of: if a collision is detected during motion of the observing camera and the walking camera, judging the nature of the collision; if a scene control surface is hit, starting the scene scheduling strategy and performing the scheduling management of indoor and outdoor scenes; if a steerable moving entity is hit, controlling the entity's motion in the scene through the moving coordinate system and motion parameters defined in the model; and if a collision is confirmed, giving a collision response. The input mapping and interpretation mechanism serves to isolate the roaming engine from the input devices and to reduce, as far as possible, the influence of input devices on the roaming system kernel: a middle layer is abstracted from viewpoint control and device functions, and the control of an input device is mapped to this middle layer, which becomes the control source that drives the viewpoint. Throughout the roaming process, several scene height and complexity reduction strategies are supported simultaneously, including: scheduling the precision replacement of the multi-level-of-detail model of each scene model according to its distance from the observing camera; determining scene control surfaces from three-dimensional planes and behaviour parameters in the scene database and using them to schedule indoor/outdoor scenes and partition complex indoor scenes, whereby during roaming, after the line-of-sight collision detection algorithm detects a model control surface, the control behaviour defined at modelling time is parsed, mapped to a scene control object set, and, according to the object lists in that set, visible objects are loaded and invisible objects are unloaded, realizing scene scheduling management; automatically removing redundant polygons in entity models when a model is loaded; and generating textures for entity surfaces. Collision detection and response further comprise the steps of: analysing the cause of camera motion; establishing forward collision detection line segments for the walking camera and the observing camera; performing observing-camera collision detection and walking-camera collision detection; if a collision occurs, performing virtual environment scene scheduling processing; and if no collision occurs, advancing the camera and performing terrain matching. The terrain matching technique further comprises the steps of: the walking camera advances; if the viewpoint position changes, the collision detection module is called; if the collision detection result is a collision, the viewpoint position change is ignored, and the walking camera and observing camera are raised by a certain height while the line of sight of the observing camera is set according to the direction of the terrain surface; if the result is no collision, the viewpoint reaches a new position. The state setting function comprises one or more of: switching the fog effect on or off, switching the two-dimensional map guide on or off, switching collision detection on or off, selecting the transparency processing mode, and setting weather conditions. The environment effects include a fog step supported technically by the constant frame rate method. The constant frame rate method further comprises: taking the average time consumed per frame over the preceding 2 seconds; computing the current frame rate; computing the difference between the current frame rate and the target frame rate; if the current frame rate is greater than the target frame rate, increasing the fog visibility, otherwise decreasing the fog visibility; and setting the new fog parameters and ending.
The features of the present invention are: 1. Complete functionality and clear interfaces. When the present invention is used to develop a scene walkthrough system, the workload is essentially that of building the scene database; almost no code needs to be rewritten for the roaming driver. 2. A dual-camera model is used for scene viewing, collision detection, and roaming control. The dual-camera model not only solves effectively the problem of collision detection against low obstacles on the ground, but also solves well the line-of-sight problem of terrain matching on sloping road surfaces. 3. Support for several scene-complexity reduction strategies, in particular an original scheduling algorithm based on scene control surfaces. This algorithm guarantees continuous roaming between connected indoor and outdoor scenes and effectively improves the real-time performance of roaming; at the same time the algorithm is simple and easy to master. 4. Compliance with virtual reality industry data standards: the roaming engine encapsulates the interpretation of multiple scene descriptions, such as the movement definitions of doors and windows and the roaming behaviour of objects above the ground — for example, a step can be terrain-matched so that the viewpoint rises automatically, whereas a lawn fence cannot be passed and must be walked around. 5. An input mapping mechanism that simultaneously supports multiple input devices such as mouse, keyboard, joystick, walker, and helmet tracker, and allows additional input devices to be connected easily. 6. A fog-based constant frame rate technique: by adjusting the density of the fog, the number of visible geometric facets in the scene is controlled and the roaming frame rate is thereby regulated.
Compared with the prior art, the beneficial effects of the virtual environment roaming engine computer system of the present invention are as follows. It is a relatively complete roaming engine that implements input mapping and viewpoint control, virtual scene scheduling management, multi-level-of-detail model switching, texture, illumination, terrain matching, collision detection and response, entity manipulation, a two-dimensional map guide, the constant frame rate method, and other engine functions. Compared with two typical domestic roaming engines it has notable advantages. Its overall functionality is comparable to the roaming system of UC Berkeley; although it is not as powerful in handling very large scenes, the scheduling algorithm based on scene control surfaces in the present invention is simple to implement and its functionality satisfies the requirements of medium-sized indoor and outdoor scenes. Compared with the Berkeley roaming system, it needs no scene preprocessing and can accomplish real-time scene roaming on a personal computer of basic configuration. For application system development, no specific program development work has to be done; the developer only needs to care about building the scene. It is therefore a virtual environment roaming engine of high versatility, complete functionality, and advanced technology.
Description of drawings
Fig. 1 shows the main program flow chart of the universal virtual environment roaming computer system of the present invention;
Fig. 2 shows the input device mapping and viewpoint control diagram of the present invention;
Fig. 3 shows the viewpoint control model diagram in the roaming engine of the present invention;
Fig. 4 is a schematic diagram of the screen area division used for mouse input in the present invention;
Fig. 5 shows the workflow diagram of model control-surface scene scheduling control in the present invention;
Fig. 6 shows the model control-surface definition diagram for entering and leaving a certain building in an embodiment of the invention;
Fig. 7 is a schematic diagram of the terrain matching technique in the viewpoint control process of the present invention;
Fig. 8 shows the two-dimensional map multi-channel model diagram of the present invention;
Fig. 9 shows the fog-based constant frame rate algorithm diagram of the present invention.
Table 1 lists the major control states defined in the general roaming engine framework;
Table 2 lists the device mapping adopted when viewpoint control is carried out with the keyboard;
Table 3 lists the device mapping adopted when viewpoint control is carried out with the mouse;
Table 4 is the keyboard menu of the general roaming engine system.
Embodiment
The present invention is described in detail below in conjunction with the drawings and specific embodiments.
The universal virtual environment roaming engine of the present invention is a roaming engine that can run independently; the software platform adopted is Visual C++ 6.0 and OpenGVS 4.3, and the operating system is Windows 2000.
Referring to Fig. 1, the present invention first creates general virtual environment resources such as scenes, entities, and frame buffers, and then loads scene database files that conform to the scene description standard. It receives external input and forms the input mapping, and then interprets the mapped input; the interpretation is divided into state setting, viewpoint control, and entity manipulation. State setting includes setting the system state, the two-dimensional guide, and environment effects; viewpoint control includes scene scheduling, collision detection, and terrain matching. Finally the system decides whether to exit: if so, resources are released; otherwise it returns to viewpoint control. When the roaming state mechanism is customized, the input mapping and interpretation mechanism is set up and one external input is accepted in each frame of the simulation loop, so that multiple input devices can be connected to the roaming system. Viewpoint control, entity manipulation, and state setting are carried out according to external input commands; several scene-complexity reduction strategies are applied during viewpoint control; collision detection and terrain matching are carried out; standard two-dimensional input devices can be used to select and manipulate objects in the three-dimensional scene; and multiple roaming control states and environment effects can be set to satisfy the requirements of different users.
Referring to Fig. 2, in order to adapt to the various demands of different roaming applications, a customizable roaming state mechanism is provided in the roaming engine of the present invention. Most virtual reality applications are OpenGL-compatible, and the internal implementation of OpenGL itself is a state mechanism; providing a state mechanism in the present invention therefore stays consistent with the graphics system while increasing the flexibility of the roaming system.
Most of the functions defined in the general roaming framework are defined as options; that is, the rambler can open or close certain functions according to his own needs, such as switching the fog effect on or off, switching the two-dimensional map on or off, deciding whether to perform collision detection, or selecting the transparency processing mode. The rambler can also set the initial state of the roaming system, such as the initial position of the observing camera, the step lengths for camera advance and rotation, the simulated weather conditions (sunny, cloudy, overcast), and the time of day (morning, noon, dusk). When viewpoint control is implemented, the information of the input devices is mostly read and processed directly. With the development of a large number of I/O devices and new interaction control techniques, more and more interactive devices will come into use; the diversity and extensibility of the equipped devices inevitably bring complexity to the design and maintenance of the software. Table 1 defines the major control states in the roaming engine framework of the present invention.
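As an illustration of this option-based state mechanism, the following C++ sketch (not code from the patent; all names and default values are assumed for the example) collects the switches of Table 1 and the initial settings into one roaming-state block:

```cpp
// Illustrative sketch of a roaming-state block holding the optional switches
// and initial settings described above. All names are assumptions.
struct RoamingState {
    // on/off switches (cf. Table 1)
    bool fogEnabled        = true;   // fog (atomizing) effect
    bool map2DEnabled      = false;  // two-dimensional map guide
    bool collisionForward  = true;   // collision detection when moving forward
    bool collisionBackward = true;   // collision detection when moving backward
    bool textureEnabled    = true;
    bool wireframe         = false;

    // initial simulation settings
    enum class Weather { Sunny, Cloudy, Overcast } weather = Weather::Sunny;
    enum class Period  { Morning, Noon, Dusk }     period  = Period::Noon;
    float initialStep  = 0.7f;   // forward step length of the observing camera
    float rotationStep = 0.3f;   // rotation step about the Y axis
};
```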
For these reasons, the present invention defines a basic device mapping: a middle layer is abstracted from viewpoint control and device functions, and the control of an input device is mapped to this middle layer, which becomes the control source that drives the viewpoint.
As can be seen, the control signals of input devices are mapped to control instructions, so that viewpoint control is separated from the direct control signals of the input devices and becomes an independent functional module. This "isolation" technique both facilitates extending the roaming system with new input devices and keeps the roaming engine relatively independent.
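The middle layer can be pictured as an abstract command set plus a per-device mapper. The following C++ sketch is only illustrative, with assumed names; it is not the engine's actual interface:

```cpp
// Devices emit raw signals; a mapper turns them into abstract viewpoint
// commands; the viewpoint driver only ever sees the commands.
enum class ViewCommand {
    Forward, Backward, TurnLeft, TurnRight, LookUp, LookDown, None
};

struct ControlInput {              // what the viewpoint driver consumes
    ViewCommand command = ViewCommand::None;
    float step  = 0.0f;            // translation step along the line of sight
    float yaw   = 0.0f;            // rotation step about the Y axis
    float pitch = 0.0f;            // rotation step about the X axis
};

class InputMapper {                // one subclass per physical device
public:
    virtual ~InputMapper() = default;
    virtual ControlInput poll() = 0;   // sampled once per simulation frame
};
```

Adding a new device then only requires a new InputMapper subclass; the viewpoint control code is untouched.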
Referring to Fig. 3, in the roaming system the viewpoint is the "avatar" of the human eye; its function is the same as that of a video camera or still camera in the real world. Viewpoint control normally refers to motion control of the observing camera. The viewpoint control model adopts two distinct cameras to simulate, respectively, the motion of the human eyes and feet. The camera simulating the eyes is called the observing camera; the camera simulating the feet is called the walking camera. The motion of the observing camera is constrained by the walking camera. Ordinary roaming systems provide only an observing camera; the purposes of providing a walking camera in the present invention are twofold: first, to realize multi-point collision detection during roaming; second, to keep the direction of travel with the walking camera while allowing the observing camera more rotational freedom, so that besides rotating left and right about the Y axis, the observing camera can also rotate up and down about the X axis to imitate looking up and bowing the head. The basic parameters of the walking camera are: the height above the ground (walker_height), the step length for advancing or retreating (step), and the rotation step about the Y axis (θ). Setting walker_height appropriately improves the overall performance of the roaming system, because the walking camera participates in the collision detection of the viewpoint control process: when walker_height is set to a non-zero value such as 0.10, entities lower than 10 cm in the virtual environment cannot collide with the viewpoint. The corresponding real-world behaviour is that all obstacles lower than 10 cm inside a building, such as thresholds, can be stepped over by a person. In this way the roaming system avoids a large amount of invalid computation, and overall system performance is improved. The basic parameters of the observing camera are essentially the same as those of the walking camera: the height above the ground (eye_height), the advance/retreat step length step, and the rotation step θ about the Y axis. In addition, the observing camera has another rotation parameter, the angular step α about the X axis, which allows the rambler to look up and down while observing the three-dimensional scene.
The viewpoint control model also defines, for the observing camera and the walking camera respectively, three parameters used in collision detection: v_p0, v_p1, v_p2 and w_p0, w_p1, w_p2, which denote the current camera position, a point at a given distance along the direction of advance, and a point at the same distance along the direction of retreat. The viewpoint control model defines the camera motions as follows: observing camera motion set {FORWARD, BACKWARD, TURN_LEFT, TURN_RIGHT, LOOK_UP, LOOK_DOWN}; walking camera motion set {FORWARD, BACKWARD, TURN_LEFT, TURN_RIGHT}.
It follows that viewpoint control is the motion control of the walking camera and the observing camera, and there are only two kinds of camera motion: translation and rotation. The present invention defines the change of camera position by the displacement step length step along the line of sight, and the change of viewing direction by the rotation steps θ and α about the coordinate axes. The input device mapping problem is therefore converted into a mathematical transformation from input device control quantities to the camera motion type and to step, θ, and α.
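For illustration, the dual-camera parameters described above might be held in structures like the following C++ sketch (the field defaults are assumptions; these are not the patent's actual classes):

```cpp
// Minimal sketch of the dual-camera viewpoint model, assuming a right-handed
// frame with Y pointing up.
struct WalkingCamera {                 // simulates the feet
    float x = 0, y = 0, z = 0;         // position in world coordinates
    float heading = 0;                 // rotation about the Y axis
    float walker_height = 0.10f;       // height above ground; obstacles lower
                                       // than this never collide with the viewpoint
    float step  = 0.7f;                // advance/retreat step length
    float theta = 0.3f;                // yaw step about the Y axis
};

struct ObservingCamera {               // simulates the eyes, bound to the feet
    float eye_height = 1.7f;           // height above ground (assumed value)
    float pitch = 0;                   // extra rotation about the X axis
    float alpha = 0.15f;               // pitch step (look up / look down)
};
```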
The keyboard input mapping is the simplest and uses direct mapping; taking the observing camera as an example, it is shown in Table 2.
Referring to Fig. 4, the schematic diagram of screen area division for mouse input, the viewpoint control mapping for mouse input is different: the present invention adopts a screen-area-division method to identify the control function of a mouse input. Denote the current mouse position by (xm, ym); from the principles of device coordinate transformation, xm ∈ [-1, 1] and ym ∈ [-1, 1], the lower-left corner of the two-dimensional display screen is (-1, -1) and the upper-right corner is (1, 1). The screen is divided into ten zones as shown in Fig. 4, labelled zone 1 through zone 10. Let the maximum camera translation step when controlling the viewpoint with the mouse be MAXSTEP and the maximum rotation step be MAXROTATE, and let speed = MAXSTEP * fabs(ym) and rot = MAXROTATE * fabs(xm); the mouse device mapping is then as shown in Table 3. The mappings of other input devices are basically similar to the mouse mapping.
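A possible implementation of this mouse mapping is sketched below in C++; the geometric layout of the zones follows Fig. 4 and is not reproduced here, and the MAXSTEP and MAXROTATE values are assumptions:

```cpp
#include <cmath>

// Normalized mouse coordinates (xm, ym) in [-1, 1] give the step sizes, and
// the screen zone (cf. Table 3) selects the motion type.
constexpr float MAXSTEP   = 1.0f;   // assumed maximum translation step
constexpr float MAXROTATE = 0.5f;   // assumed maximum rotation step

struct MouseMotion { bool forward, backward, turnLeft, turnRight; float step, rot; };

MouseMotion mapMouse(float xm, float ym, int zone) {
    MouseMotion m{};                      // all flags false, steps zero
    m.step = MAXSTEP   * std::fabs(ym);   // speed = MAXSTEP * |ym|
    m.rot  = MAXROTATE * std::fabs(xm);   // rot   = MAXROTATE * |xm|
    switch (zone) {                       // per Table 3
        case 1: m.forward  = true; break;
        case 2: m.backward = true; break;
        case 3: m.forward  = m.turnRight = true; break;
        case 4: m.backward = m.turnRight = true; break;
        case 5: case 6: m.turnRight = true; break;
        case 7: m.forward  = m.turnLeft  = true; break;
        case 8: m.backward = m.turnLeft  = true; break;
        case 9: case 10: m.turnLeft = true; break;
    }
    return m;
}
```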
Describing a community roaming virtual environment with geometry-based virtual reality techniques, especially the indoor environment of a building of intermediate complexity, generally requires tens of thousands or even hundreds of thousands of entity surface triangles. Because of limits on graphics hardware performance, physical memory, CPU frequency, and other conditions, a virtual reality system that adopts no scene-complexity reduction strategy cannot guarantee real-time interactive roaming. Adopting several scene-complexity reduction strategies improves the overall performance of the roaming system. These strategies include scene scheduling based on model control surfaces and collision detection, redundant polygon removal, texture mapping, and multi-level-of-detail models.
Referring to Fig. 5, scene scheduling management has two meanings. On the one hand, because of the limited physical memory of the computer, a complex scene database sometimes cannot all be loaded into memory at once; here scene scheduling management means loading and unloading model data block by block. On the other hand, because of limits on computer hardware, especially graphics acceleration card performance, rendering too many polygons in one frame also exceeds the processing capacity of the system; reducing the number of polygons rendered per frame is the other meaning of scene scheduling management.
Classical indoor scene scheduling controls, such as PVS (Potentially Visible Set) algorithms, mostly split the work into two phases: roaming, and scene scheduling preprocessing. For a specific virtual environment, such an algorithm partitions the building space "off-line" by division planes parallel to the world coordinate axes — the walls, floors, and ceilings that enclose rooms — to form cells; it then marks viewing openings such as doors and windows on the partitioned cells, computes cell-to-cell and cell-to-entity visibility regions one by one, and finally stores the results in the scene database. In the roaming phase the roaming driver then no longer needs to make complicated visibility judgements; it directly uses the precomputed results for scene scheduling control, with good real-time performance. Precomputation methods also have notable shortcomings, however: the precomputed results, such as the space partition and the visibility regions, depend on the particular virtual environment, occupy considerable scene database space, and require a large precomputation workload. For hollow, atrium-style buildings in particular, the efficiency of precomputed-visibility-region algorithms is relatively low.
The roaming engine of the present invention proposes a scene scheduling control method based on model control surfaces and collision detection. This method organically combines scene modelling with roaming control and determines visibility regions by a subjective evaluation standard, achieving good results.
As can be seen, the model control surfaces are set in the modelling phase and stored in the scene database together with the other scene data. A control surface is a spatial plane, determined by the equation Ax + By + Cz + D = 0. What distinguishes it from an ordinary spatial plane is that certain specific properties are defined for it during modelling, the most important being the scene scheduling control behaviour. The control behaviour is defined only as an identifier in the modelling phase; the concrete control behaviour is parsed and determined by the roaming engine.
During roaming, after the line-of-sight collision detection algorithm detects a model control surface, the control behaviour defined at modelling time is parsed and mapped to a scene control object set; according to the object lists in that set, visible objects are loaded and invisible objects are unloaded, realizing scene scheduling management. The scene control object set is finally determined by a subjective criterion through repeated practice with the roaming program: it is decided which objects are visible and which are invisible at a given control surface, and the result is recorded in a data file of the roaming driver.
Fig. 6 is the model control-surface definition diagram for entering and leaving a certain building. When the roaming viewpoint approaches the control surface just in front of the building's door, the collision detection algorithm reports the control behaviour specified for that surface, for example "enter the building"; after the behaviour parsing module analyses it, the corresponding scene scheduling control object set is found, and quite naturally the building's exterior profile model is closed, the building's indoor model is loaded, and the visible and invisible objects of the six floors are correspondingly displayed or suppressed.
A model control surface is described by a six-tuple (A, B, C, D, arg1, arg2), where A, B, C, D uniquely determine the plane Ax + By + Cz + D = 0 in three-dimensional space, and arg1, arg2 are the control behaviour parameters given for the surface, passed by the detection algorithm to the control behaviour parsing module. Several model control surfaces are set up in the building scene model; Fig. 6 illustrates the scene scheduling control for entering and leaving the building, where six model control surfaces set the scene scheduling behaviour for entry and exit. In addition, control surfaces are also placed near the stairs where floors are switched and near the exterior windows of each floor: when the viewpoint is on the second floor of the building, most objects of the other floors are invisible except for objects such as the windows, doors, and walls that form the atrium, and the floor-switching control surfaces control this kind of scene scheduling management, greatly reducing the number of polygons drawn in one frame.
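The six-tuple and its use for scheduling can be sketched in C++ as follows; the object-set contents and the load/unload callbacks are illustrative assumptions, not the engine's actual data structures:

```cpp
#include <string>
#include <vector>

// The six-tuple (A, B, C, D, arg1, arg2) defines the plane Ax+By+Cz+D = 0
// plus two behaviour parameters handed to the behaviour-parsing module.
struct ControlSurface {
    float A, B, C, D;      // plane coefficients
    int   arg1, arg2;      // control-behaviour parameters defined at modelling time
};

struct ControlObjectSet {              // decided off-line by subjective evaluation
    std::vector<std::string> toLoad;   // objects that become visible
    std::vector<std::string> toUnload; // objects that become invisible
};

// Called after the line-of-sight collision test reports a hit on a control
// surface: the resolved behaviour selects an object set, which drives loading
// and unloading (e.g. load the indoor model, unload the exterior shell).
void onControlSurfaceHit(const ControlSurface& cs,
                         const ControlObjectSet& set,
                         void (*load)(const std::string&),
                         void (*unload)(const std::string&)) {
    for (const auto& name : set.toLoad)   load(name);
    for (const auto& name : set.toUnload) unload(name);
}
```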
A basic principle of three-dimensional scene modelling is to obtain the same realistic visual effect with the fewest possible polygons. The surface data describing an entity model often contains redundancy, and entities that are modelled separately can also produce data redundancy when the models are integrated. Eliminating these redundant surface polygons can reduce the complexity of the whole scene to a great extent. For example, a prior-art multi-level-of-detail window model consists of 156 triangles; common modelling techniques describe the topmost window rib with 12 triangles (6 rectangular faces). Because the top planes on both sides coincide with the vertical planes of the sides, they are not only redundant data but also cause Z-value contention, and should be deleted in advance. After one such pass of deletion over the whole window, a window with the same visual effect can be built from 104 triangles. Considering further the data redundancy produced by model integration, the topmost facet coincides with the wall surface and can also be deleted, and so on; the final window model needs only 96 triangles to describe its geometry and topology completely. Compared with the 156-triangle model, there is no loss of visual effect at all, while system performance improves by nearly 40%.
Texture mapping is a technique for the realistic representation of entity surface detail that is widely adopted in computer graphics applications; it is also an effective way to control scene complexity and accelerate graphics rendering. It maps a two-dimensional texture image defined on a rectangular array onto the surface of a three-dimensional entity, or modifies the light distribution of the entity surface by a procedure.
The conventional way of generating texture is to pre-define a texture pattern on a planar region in texture space, and then establish a mapping between points of the object surface and points of the texture space. After the visible points of the object surface are determined, multiplying them by the brightness values of the corresponding points in texture space attaches the texture pattern to the object surface. A similar method can give the object surface a rough appearance, also called bump texture; in that case the texture value acts on the normal vector rather than on the colour brightness. Whether colour texture or bump texture is generated, usually only a rough approximation to the real pattern is required, with no need for accurate simulation, so the realism of entity surfaces is greatly increased without significantly increasing the amount of computation.
The texture technique can reduce scene complexity significantly, but the method also has limitations. When the rambler is close to an entity model, the entity detail represented only by texture lacks realism. If multi-level-of-detail models are used in combination with the texture technique, the complexity of real-time scene rendering can be reduced effectively while the realism of entity detail is preserved.
A multi-level-of-detail model is a group of models of an entity in a scene, or of the scene itself, obtained with description methods of different levels of detail, among which one is selected at drawing time. Because virtual reality usually describes geometric entities in the scene with polygon meshes, multi-level-of-detail models use meshes of different complexity to achieve real-time rendering of the virtual scene at different levels of fineness. In general, a multi-level-of-detail model uses more polygons to describe the detail of an entity and fewer polygons to describe its outline. When the roaming viewpoint is close to the entity, the detailed model is selected; when the viewpoint is far away, the outline model is used. General roaming systems adopt three levels of detail. Taking the window described above as an example, the high-level detail model uses geometric modelling throughout, with the design data as its benchmark; the medium-level detail model replaces the window panes with texture but still models the windowsill geometrically; and the low-level detail model treats the whole window as a single textured plane.
The roaming engine is responsible for control behaviour parsing and scene scheduling management in the scene scheduling control method based on model control surfaces. Because the control object sets are stored in the roaming system, giving for each model control surface the object set to load when it "opens" and the object set to unload when it "closes", behaviour parsing and scene scheduling become very simple.
After the viewpoint collides with a model control surface and the control parameters are returned, behaviour parsing only has to locate the control object set corresponding to that control surface. Then, for the relevant model control surfaces, the objects in the object sets are unloaded and loaded one by one accordingly.
As shown in Fig. 7, during viewpoint control the roaming system must perform collision detection between the viewpoint and the various entities in the virtual environment and then give a reasonable collision response. The collision detection algorithm is contained in the camera motion control module: once a collision between the viewpoint and a virtual entity occurs, the advance or retreat instruction given by the input device is ignored, i.e. the camera no longer moves forwards or backwards; this is the simple collision response provided in the present invention. Terrain matching normally refers to the way a moving object in the virtual environment, such as a ground vehicle, changes its attitude with the rises, falls, and lateral tilts of the terrain. In a roaming system what the rambler controls is the "avatar" of the human eye, i.e. the viewpoint, so terrain matching here means that the height and line of sight of the viewpoint change with the terrain.
The collision detection algorithm shows when the terrain matching module is invoked. Any external input that changes the viewpoint position causes the collision detection module to be called. If the collision detection result shows a collision, the collision response is to ignore the viewpoint position change, and the viewpoint stays where it was. If the result shows no collision, the viewpoint moves to a new position according to the prescribed step length and direction; but this position and line of sight are not yet the final response of the viewpoint to the external input — the final position is determined after terrain matching.
After the camera advances by one step length along the line-of-sight direction, its position is denoted WPOS (xw, yw, zw). The camera position after terrain matching necessarily lies on the Yv axis of the eye coordinate system. A vertical line through WPOS, i.e. along the Yv axis, is constructed, and the method of intersecting a line of sight with a set of spatial polygons yields the planes that intersect the Yv axis and the coordinates of the intersection points. Among the intersection points whose Y component does not exceed a given value, the highest intersecting plane is defined as the terrain surface. The walking camera is placed at the Y height of this surface, and the height of the observing camera is set to that of the walking camera plus the fixed value eye_height − walker_height. The terrain matching of the line of sight is comparatively simple: the direction of the terrain surface polygon is taken and the viewing direction of the observing camera is made consistent with it, while the line of sight of the walking camera remains unchanged.
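The terrain-matching selection of the ground surface can be sketched as follows in C++; the caller is assumed to supply the intersections of the vertical line through WPOS with the scene polygons, and maxY stands in for the "given value" bounding the Y component:

```cpp
#include <optional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
struct Hit  { Vec3 point; Vec3 normal; };   // intersection point and surface normal

// Returns the matched heights (walking camera, observing camera), or nothing
// if no acceptable terrain surface is found among the intersections.
std::optional<std::pair<float, float>>
matchTerrain(const std::vector<Hit>& hits, float maxY,
             float eye_height, float walker_height) {
    std::optional<Hit> ground;
    for (const Hit& h : hits) {
        // keep the highest intersecting plane whose Y does not exceed maxY
        if (h.point.y <= maxY && (!ground || h.point.y > ground->point.y))
            ground = h;
    }
    if (!ground) return std::nullopt;
    float walkY = ground->point.y;                       // feet on the terrain surface
    float eyeY  = walkY + (eye_height - walker_height);  // eyes a fixed offset above
    // The observing camera's line of sight is then aligned with ground->normal,
    // while the walking camera keeps its original viewing direction.
    return std::make_pair(walkY, eyeY);
}
```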
The two-dimensional map is a widely used roaming navigation aid in roaming systems. Compared with the three-dimensional scene view, its advantage is that it provides a wider field of view, making it easy for the rambler to grasp the current location and the surrounding environment as a whole.
Generally speaking, developing a two-dimensional map display module requires extracting two-dimensional basic features such as roads and building dimensions from the modelling space, drawing lines with the OpenGL graphics system, filling polygonal regions with the corresponding colours, and also controlling a viewpoint pointer that moves consistently with the viewpoint in the three-dimensional scene.
As shown in Fig. 8, the present invention uses another, simpler and more direct method of developing the two-dimensional map: generating it by orthographic projection of the three-dimensional scene. Following the principle of orthographic projection in computer graphics, this method "compresses" the three-dimensional scene model onto a plane and then uses camera resources to realize the display and zooming of the map and the synchronized motion of the two-dimensional and three-dimensional viewpoints. A multi-channel programming technique is used to realize the two-dimensional map guide: a channel model as shown in Fig. 8 is established, in which mainchannel (the left channel) is the main channel used to display the three-dimensional scene of the roaming system, and channel2D (the right channel) displays the two-dimensional map. The main channel occupies the whole screen, while the two-dimensional map channel occupies only the upper-right eighth of the screen. In channel2D not only a camera but also illumination and an observed body are preset, and the lens of the camera always faces the model plane representing the two-dimensional map. Of course, to display the two-dimensional map quickly and conveniently, no system would load the whole model of the three-dimensional world into channel2D — that would double the system load and be entirely counterproductive. Instead, the three-dimensional model is simplified to ground-surface data only (all entity parts above the ground are deleted), and after simple touch-up it forms the three-dimensional representation model of the two-dimensional map. Once the multi-channel mechanism and the two-dimensional map representation model are established, synchronizing the viewpoints indicated by the two-dimensional map and the three-dimensional scene reduces to having the camera in channel2D track the walking camera in mainchannel, with the lens direction of the channel2D camera unchanged during tracking. The zoom function of the two-dimensional map is realized by setting different focal lengths for the camera.
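A rough equivalent of this two-channel arrangement, written with plain OpenGL viewports rather than the OpenGVS channel API (whose calls are not reproduced here), is sketched below; all function and parameter names are assumptions:

```cpp
#include <GL/gl.h>

// Main viewport draws the 3D scene; a small upper-right viewport draws the
// simplified, ground-only model under an orthographic projection whose centre
// tracks the walking camera, giving a top-down map.
void drawFrame(int winW, int winH,
               void (*drawScene)(), void (*drawGroundOnlyModel)(),
               float walkX, float walkZ, float mapHalfExtent /* zoom */) {
    // main channel: full-screen perspective view (projection set up elsewhere)
    glViewport(0, 0, winW, winH);
    drawScene();

    // 2D-map channel: small upper-right corner viewport (the embodiment uses
    // roughly an eighth of the screen), top-down orthographic view
    glViewport(winW * 3 / 4, winH * 3 / 4, winW / 4, winH / 4);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(-mapHalfExtent, mapHalfExtent, -mapHalfExtent, mapHalfExtent,
            -1000.0, 1000.0);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    glRotatef(90.0f, 1.0f, 0.0f, 0.0f);   // look straight down the Y axis
    glTranslatef(-walkX, 0.0f, -walkZ);   // keep the walker at the map centre
    glDisable(GL_DEPTH_TEST);             // the map model is essentially flat
    drawGroundOnlyModel();
    glEnable(GL_DEPTH_TEST);
    glPopMatrix();                        // restore modelview
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();                        // restore projection
    glMatrixMode(GL_MODELVIEW);
}
```

Changing mapHalfExtent plays the role of the different focal lengths used for zooming in the embodiment.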
Virtual reality applications pursue not only a high interactive simulation rate, i.e. frame rate, but also a frame rate that is consistent and constant, or that at least remains within an interval the user expects. For war-simulation virtual reality applications in particular, a jumping frame rate leads to erroneous simulation results and leaves the participant at a loss. The frame rate is the reciprocal of the time used to draw one frame of the scene; to obtain a constant frame rate, therefore, the number of scene surface polygons drawn in one frame must be controlled.
In 1996, Thomas Funkhouser and Seth Teller of the University of North Carolina proposed, in the Soda Hall roaming system, an effective but rather complicated constant frame rate method. For each object in the scene database a triple (O, L, R) is established, where O denotes an object in the scene, L a level-of-detail (LOD) model class of that object, and R the rendering algorithm applied to it. Two evaluation functions are defined on the triples, Cost(O, L, R) and Benefit(O, L, R): Cost(O, L, R) computes the drawing time of the object when level-L LOD and algorithm R are selected, and Benefit(O, L, R) estimates the contribution of the object to the visual quality of the whole scene under level-L detail and algorithm R. The algorithm chooses the triples so that Σ_S Cost(O, L, R) ≤ TargetFrameTime while Σ_S Benefit(O, L, R) is maximized. The constant frame rate problem is thus converted, in this algorithm, into using a dedicated thread to compute the two functions and to determine, before each frame is drawn, which level of detail and which rendering algorithm to use.
The constant frame rate method of the University of North Carolina is an efficient look-ahead optimization algorithm applicable to most virtual reality applications, but the computation in this algorithm relies mainly on subjective criteria, and the whole algorithm is rather complicated.
Fog not only improves the realism of the scene but can also improve the overall performance of the virtual reality system. The essence of fog is to blend the original colour of the scene with the colour of the fog according to a scale factor whose value depends on the fog model. Graphics accelerators compatible with OpenGL mostly provide three fog models:
where f is the scale factor, density is the density of the fog, z is the distance from the viewpoint to the centre of the scene facet in view coordinates, and start and end are the starting and ending values of the fog depth:
f = e^(−density·z)  (GL_EXP)
f = e^(−(density·z)²)  (GL_EXP2)
f = (end − z) / (end − start)  (GL_LINEAR)
Under the RGBA colour model, the colour C of each pixel on the viewing plane after fogging is computed by
C = f·C_i + (1 − f)·C_f
where C_i is the RGBA colour value of the scene facet, C_f is the colour of the fog, and f is the scale factor discussed above.
The above fog models give a key property of fog: the farther an entity is from the viewpoint, the greater the degree of fogging and the smaller the visibility of the entity in the scene. The spline-transform fog model fogs the scene quite markedly: once the distance between the viewpoint and an entity exceeds a set value, the colour of the fogged entity is entirely replaced by the colour of the fog and no further real-time rendering of it is needed.
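Transcribed directly from the formulas above, a small C++ sketch of the fog factor and the RGBA blend might read as follows (illustrative code, not the OpenGL driver's implementation):

```cpp
#include <cmath>

enum class FogMode { Exp, Exp2, Linear };

// Scale factor f for the three fog models.
float fogFactor(FogMode mode, float density, float z, float start, float end) {
    switch (mode) {
        case FogMode::Exp:    return std::exp(-(density * z));
        case FogMode::Exp2: { float dz = density * z; return std::exp(-dz * dz); }
        case FogMode::Linear: return (end - z) / (end - start);
    }
    return 1.0f;
}

// Blend the fragment colour Ci with the fog colour Cf: C = f*Ci + (1-f)*Cf.
struct RGBA { float r, g, b, a; };
RGBA applyFog(float f, const RGBA& ci, const RGBA& cf) {
    return { f * ci.r + (1 - f) * cf.r,
             f * ci.g + (1 - f) * cf.g,
             f * ci.b + (1 - f) * cf.b,
             f * ci.a + (1 - f) * cf.a };
}
```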
Combining the above analysis, the present invention is designed to keep the average display time per frame of the scene within a small range by adjusting the fog model parameters, and thereby to keep the interactive simulation rate constant.
Referring to Fig. 9, the present invention adopts a simple but highly effective constant frame rate technique, namely a constant frame rate method based on adjusting the fog visibility. The method can be regarded as a posterior adjustment: according to the difference between the frame rate over a preceding period of time and the target frame rate, the fog effect is set dynamically so that the number of entity surface polygons processed per frame increases or decreases, thereby raising or lowering the frame rate. The concrete algorithm is shown in the flow chart of Fig. 9. As can be seen, the algorithm by which the present invention realizes the fog effect makes the current frame rate approach the target frame rate step by step, so that the frame rate of the roaming system "oscillates" near the target frame rate and the goal of a constant frame rate within a fixed interval is achieved. This algorithm is suitable for virtual reality applications with large scenes.
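A minimal C++ sketch of this posterior adjustment, assuming a target frame rate and a visibility step size chosen by the developer (names and the fixed step policy are assumptions, not the patent's code), might look like the following:

```cpp
#include <deque>

// Average the frame time over the last ~2 seconds, compare the resulting
// frame rate with the target, and nudge the fog visibility (far distance)
// up or down so the frame rate oscillates around the target (cf. Fig. 9).
class ConstantFrameRateController {
public:
    ConstantFrameRateController(float targetFps, float visibilityStep)
        : targetFps_(targetFps), step_(visibilityStep) {}

    // Called once per frame; returns the new fog visibility.
    float update(float frameTimeSeconds, float currentVisibility) {
        history_.push_back(frameTimeSeconds);
        window_ += frameTimeSeconds;
        while (window_ > 2.0f && history_.size() > 1) {   // keep ~2 s of history
            window_ -= history_.front();
            history_.pop_front();
        }
        float avgFrameTime = window_ / history_.size();
        float currentFps   = 1.0f / avgFrameTime;

        if (currentFps > targetFps_)          // running fast: see further
            return currentVisibility + step_;
        if (currentFps < targetFps_)          // running slow: thicken the fog
            return currentVisibility - step_;
        return currentVisibility;
    }

private:
    std::deque<float> history_;
    float window_ = 0.0f;
    float targetFps_, step_;
};
```

In each simulation frame the returned visibility would then be written into the fog model's far-distance parameter before drawing.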
Table 4 gives the detailed keyboard function table of the general roaming engine system of the embodiment of the invention. Following the above steps, the general-purpose roaming-driver computer system of the present invention for virtual environment applications can be developed.
State control | Hot key | Default setting
Fog effect enable/disable | CTRL+Z | Enabled
Two-dimensional map guide enable/disable | M | Off
Outdoor scene visible/invisible | P | Visible
Forward camera-motion collision detection enable/disable | CTRL+G | Collision detection enabled
Backward camera-motion collision detection enable/disable | CTRL+R | Collision detection enabled
Texture on/off | CTRL+X | On
Wireframe display on/off | CTRL+P | Off
Frame statistics display show/hide | TAB+'-' | Hidden
Debug information output enable/disable | CTRL+T | Disabled
Automatic DOF entity handling enable/disable | O | Enabled
Transparency algorithm switching selection | F8, F9 | AF state
Two-dimensional map camera zoom selection | F1–F5 | Maximum focal length
Frame buffer single/double mode switching | CTRL+E | Single-buffer mode
Mouse cursor show/hide | CTRL+B | Shown
Observing camera look up/down enable/disable | CTRL+H | Enabled
Table 1
Key definition | Motion type | step | θ | α
UP ARROW KEY | FORWARD | 0.7 | n/a | n/a
DOWN ARROW KEY | BACKWARD | 0.5 | n/a | n/a
LEFT ARROW KEY | TURN_LEFT | n/a | 0.3 | n/a
RIGHT ARROW KEY | TURN_RIGHT | n/a | 0.3 | n/a
U KEY | LOOK_UP | n/a | n/a | 0.15
J KEY | LOOK_DOWN | n/a | n/a | 0.15
Table 2
Mouse region | Camera motion type | step | θ | α
Zone 1 | FORWARD | speed | n/a | n/a
Zone 2 | BACKWARD | speed | n/a | n/a
Zone 3 | FORWARD, TURN_RIGHT | speed | rot | n/a
Zone 4 | BACKWARD, TURN_RIGHT | speed | rot | n/a
Zone 5 | TURN_RIGHT | n/a | rot | n/a
Zone 6 | TURN_RIGHT | n/a | rot | n/a
Zone 7 | FORWARD, TURN_LEFT | speed | rot | n/a
Zone 8 | BACKWARD, TURN_LEFT | speed | rot | n/a
Zone 9 | TURN_LEFT | n/a | rot | n/a
Zone 10 | TURN_LEFT | n/a | rot | n/a
Table 3
Hot key | Function description | Hot key | Function description
↑ (up arrow) | Move viewpoint forward | f | Decrease fog density
↓ (down arrow) | Move viewpoint backward | F | Increase fog density
← (left arrow) | Turn viewpoint left | O | Automatic door/window opening on/off
→ (right arrow) | Turn viewpoint right | M | Two-dimensional map on/off
U | Look up | P | Outdoor scene on/off
J | Look down | T | Debug information output on/off
L | Helicopter viewpoint descends | r | Decrease the red component of the material
H | Helicopter viewpoint rises | R | Increase the red component of the material
PageUp | Set helicopter viewpoint | g | Decrease the green component of the material
PageDown | Reset helicopter viewpoint | G | Increase the green component of the material
Home | Go to the door of a certain building | b | Decrease the blue component of the material
End | Go to a specified point | B | Increase the blue component of the material
F1 | Two-dimensional map camera zoom | C | Cloud layer on/off
F2 | Two-dimensional map camera zoom | CTRL+G | Forward collision detection on/off
F3 | Two-dimensional map camera zoom | CTRL+R | Backward collision detection on/off
F4 | Two-dimensional map camera zoom | CTRL+B | Mouse pointer show/hide
F5 | Two-dimensional map camera zoom | CTRL+X | Texture drawing on/off
F7 | Select transparency algorithm 1 | CTRL+I | Statistics information on/off
F8 | Select transparency algorithm 2 | CTRL+C | Camera control on/off
F9 | Set fog density | CTRL+H | Observing camera on/off
1 | Place camera on floor 1 of the building | CTRL+P | Wireframe drawing on/off
2 | Place camera on floor 2 of the building | CTRL+L | Enlarge virtual entity
3 | Place camera on floor 3 of the building | CTRL+S | Shrink virtual entity
4 | Place camera on floor 4 of the building | CTRL+M | Rotate virtual entity in the positive direction
5 | Place camera on floor 5 of the building | CTRL+N | Rotate virtual entity in the negative direction
a | Increase the ambient value of sunlight | CTRL+Z | Fog on/off
A | Decrease the ambient value of sunlight | CTRL+T | Texture replacement function key
d | Increase the diffuse value of sunlight | '-' | Detailed statistics display on/off
D | Decrease the diffuse value of sunlight
s | The sun rises
S | The sun sets
Table 4

Claims (10)

1. A universal virtual environment roaming engine computer system, composed of a personal computer, a graphics acceleration card, a walker, stereoscopic display and tracking equipment, a scene database, and roaming engine core components, the implementation method of the roaming engine computer system comprising the following steps:
(a) creating general virtual reality application resources;
(b) loading scene database files that conform to the scene description standard;
(c) customizing the roaming state mechanism;
characterized by further comprising:
(d) setting up the input mapping and interpretation mechanism and accepting one external input in each frame of the simulation loop, so that multiple input devices can be connected to the roaming system;
(e) performing viewpoint control, entity manipulation, and state setting according to external input commands; applying several scene-complexity reduction strategies during viewpoint control; performing collision detection and terrain matching; supporting the use of standard two-dimensional input devices to select and manipulate objects in the three-dimensional scene; and setting multiple roaming control states and environment effects to satisfy the requirements of different users.
2. The universal virtual environment roaming engine computer system according to claim 1, characterized in that: said viewpoint control is roaming viewpoint control using a dual-camera model, in which an observing camera and a walking camera are created, the observing camera is constrained by the walking camera, the walking camera serves to realize multi-point collision during roaming and to maintain the direction of travel, and the observing camera has three rotational degrees of freedom.
3. The universal virtual environment roaming engine computer system according to claim 1, characterized in that said viewpoint control further comprises:
(a) if a collision is detected during motion of the observing camera and the walking camera, judging the nature of the collision;
(b) if a scene control surface is hit, starting the scene scheduling strategy and performing the scheduling management of indoor and outdoor scenes;
(c) if a steerable moving entity is hit, controlling the motion of the entity in the scene through the moving coordinate system and motion parameters defined in the model;
(d) if a collision is confirmed, giving a collision response.
4. The universal virtual environment roaming engine computer system according to claim 1, characterized in that step (d) further comprises:
(a) the input mapping and interpretation mechanism isolates the roaming engine from the input devices, minimizing the influence of the input devices on the roaming system kernel;
(b) an intermediate layer is abstracted from the viewpoint control and device functions;
(c) the controls of the input devices are mapped onto this intermediate layer, which becomes the control source driving the viewpoint.
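One way to picture the intermediate layer is as a small set of abstract navigation commands that every concrete device is translated into, so the viewpoint code never sees device-specific details. The command names and the keyboard mapping below are illustrative assumptions only.

```cpp
#include <iostream>
#include <map>

// Abstract commands forming the intermediate layer between devices and viewpoint control.
enum class NavCommand { MoveForward, MoveBackward, TurnLeft, TurnRight, None };

// A device-specific mapping: here a keyboard, but a tracker or walking device
// would supply its own table without touching the viewpoint code.
NavCommand mapKeyboard(char key) {
    static const std::map<char, NavCommand> table = {
        {'w', NavCommand::MoveForward}, {'s', NavCommand::MoveBackward},
        {'a', NavCommand::TurnLeft},    {'d', NavCommand::TurnRight}};
    auto it = table.find(key);
    return it != table.end() ? it->second : NavCommand::None;
}

// The viewpoint driver consumes only abstract commands.
void driveViewpoint(NavCommand cmd) {
    if (cmd == NavCommand::MoveForward)   std::cout << "advance walking camera\n";
    else if (cmd == NavCommand::TurnLeft) std::cout << "rotate heading left\n";
}

int main() { driveViewpoint(mapKeyboard('w')); }
```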
5. The universal virtual environment roaming engine computer system according to claim 1, characterized in that multiple scene-complexity reduction strategies are supported simultaneously throughout the roaming process, comprising the following steps:
(a) scheduling the precision of the multi-level-of-detail models according to the distance between the observation camera and each scene model;
(b) determining scene control surfaces from three-dimensional planes and behavior parameters in the scene database, and using them for the scheduling of indoor and outdoor scenes and the partitioning of complex indoor scenes; during roaming, after a line-of-sight collision detection algorithm detects a model control surface, the control behavior identifier defined at modeling time is parsed and mapped into the scene control object set; according to the object list of the control object set, visible objects are loaded and invisible objects are unloaded, realizing scene scheduling;
(c) automatically removing redundant polygons in solid models when the models are loaded;
(d) generating textures for solid object surfaces.
6. The universal virtual environment roaming engine computer system according to claim 1, characterized in that the collision detection and response further comprise:
(a) analyzing the cause of the camera motion and constructing forward collision detection line segments for the walking camera and the observation camera;
(b) performing collision detection for the observation camera;
(c) performing collision detection for the walking camera; if a collision occurs, performing virtual environment scene scheduling;
(d) if no collision occurs, advancing the camera and performing terrain matching.
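A minimal reading of the forward collision detection line segment in step (a) is a segment cast from the camera position along its direction of travel and tested against obstacle bounding volumes. The segment length, the sphere test, and all names below are assumptions; the patent does not specify the geometric primitives used.

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3  operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3  operator*(const Vec3& a, float s)       { return {a.x * s, a.y * s, a.z * s}; }
float dot(const Vec3& a, const Vec3& b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; };

// Does the segment from 'start' to 'start + dir * length' pass through the sphere?
// 'dir' is assumed to be a unit vector (the camera's direction of travel).
bool segmentHitsSphere(const Vec3& start, const Vec3& dir, float length, const Sphere& s) {
    Vec3 toCenter = s.center - start;
    float t = dot(toCenter, dir);                 // closest approach along the segment
    if (t < 0.0f)    t = 0.0f;
    if (t > length)  t = length;
    Vec3 closest = start + dir * t;
    Vec3 gap = s.center - closest;
    return dot(gap, gap) <= s.radius * s.radius;
}

int main() {
    Vec3 cameraPos{0, 0, 0}, forward{1, 0, 0};    // walking camera looking along +x
    Sphere wall{{3, 0, 0}, 1.0f};
    bool hit = segmentHitsSphere(cameraPos, forward, 5.0f, wall);
    std::printf("forward segment %s the obstacle\n", hit ? "hits" : "misses");
}
```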
7. The universal virtual environment roaming engine computer system according to claim 1, characterized in that the terrain matching technique further comprises the following steps:
(a) the walking camera advances;
(b) if the viewpoint position changes, the collision detection module is called;
(c) if the collision detection result is a collision, the viewpoint position change is ignored, the walking camera and the observation camera are raised by a certain height, and the line of sight of the observation camera is set along the direction of the terrain surface;
(d) if the result is no collision, the viewpoint moves to the new position.
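Terrain matching can be illustrated by moving the walking camera horizontally and then clamping its height to the terrain at the new position. The heightAt function and the eye-height constant below are placeholders; a real implementation would query the loaded terrain model.

```cpp
#include <cmath>
#include <cstdio>

// Placeholder terrain: a gentle analytic height field standing in for a mesh query.
float heightAt(float x, float y) { return 2.0f * std::sin(0.1f * x) + 0.5f * std::cos(0.1f * y); }

struct Camera { float x, y, z; };

// Move the walking camera horizontally, then match its height to the terrain
// so the viewpoint stays a fixed eye height above the ground.
void advanceWithTerrainMatch(Camera& cam, float dx, float dy, float eyeHeight) {
    cam.x += dx;
    cam.y += dy;
    cam.z = heightAt(cam.x, cam.y) + eyeHeight;   // follow the terrain surface
}

int main() {
    Camera walker{0.0f, 0.0f, 1.7f};
    advanceWithTerrainMatch(walker, 10.0f, 0.0f, 1.7f);
    std::printf("walking camera now at height %.2f\n", walker.z);
}
```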
8. The universal virtual environment roaming engine computer system according to claim 1, characterized in that the state setting function comprises one or more of: switching the fog effect on or off, switching the two-dimensional map guide on or off, switching collision detection on or off, selecting the transparency processing mode, and setting the weather conditions.
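The state setting function can be pictured as a small block of toggles and modes consulted every frame; the field names and values below are invented for illustration.

```cpp
#include <cstdio>

// Illustrative roaming state block; one instance is consulted each frame.
struct RoamingState {
    bool fogEnabled         = false;
    bool mapGuideEnabled    = true;
    bool collisionDetection = true;
    int  transparencyMode   = 0;   // index of the selected transparency algorithm
    int  weather            = 0;   // e.g. 0 = clear, 1 = overcast, 2 = rain
};

int main() {
    RoamingState state;
    state.fogEnabled = !state.fogEnabled;   // toggle fog, e.g. in response to CTRL+Z
    std::printf("fog is %s\n", state.fogEnabled ? "on" : "off");
}
```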
9. The universal virtual environment roaming engine computer system according to claim 1, characterized in that the environmental special effects comprise a fog effect supported by a constant frame rate method.
10. The universal virtual environment roaming engine computer system according to claim 9, characterized in that the constant frame rate method further comprises:
(a) obtaining the average time consumed per frame during the preceding 2 seconds;
(b) calculating the current frame rate;
(c) calculating the difference between the current frame rate and the target frame rate;
(d) if the current frame rate is greater than the target frame rate, increasing the fog visibility; otherwise, decreasing the fog visibility;
(e) setting the new fog parameters and ending.
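Steps (a) through (e) can be written out almost directly: average the frame time over roughly the last 2 seconds, convert it to a frame rate, compare with the target, and widen or narrow the fog visibility accordingly. The adjustment step size, limits, and class name below are assumptions, not values from the patent.

```cpp
#include <cstdio>
#include <deque>

// Adjusts fog visibility so that the rendered frame rate tracks a target value.
class FogController {
public:
    FogController(float targetFps, float visibility)
        : targetFps_(targetFps), visibility_(visibility) {}

    // Call once per frame with the time the frame took; returns the new fog visibility.
    float update(float frameTime) {
        history_.push_back(frameTime);
        total_ += frameTime;
        while (total_ > 2.0f && history_.size() > 1) {        // keep roughly 2 s of samples
            total_ -= history_.front();
            history_.pop_front();
        }
        float avgFrameTime = total_ / history_.size();        // step (a)
        float currentFps   = 1.0f / avgFrameTime;             // step (b)
        float diff         = currentFps - targetFps_;         // step (c)
        if (diff > 0.0f) visibility_ += step_;                 // step (d): spare time -> see farther
        else             visibility_ -= step_;                 //           too slow   -> thicken fog
        if (visibility_ < minVisibility_) visibility_ = minVisibility_;
        return visibility_;                                    // step (e): new fog parameter
    }

private:
    std::deque<float> history_;
    float total_ = 0.0f;
    float targetFps_;
    float visibility_;                 // fog visibility distance, scene units
    const float step_ = 5.0f;          // assumed adjustment per frame
    const float minVisibility_ = 20.0f;
};

int main() {
    FogController fog(30.0f, 200.0f);
    // Simulate a run of slow frames (25 fps): visibility should shrink step by step.
    for (int i = 0; i < 10; ++i) std::printf("visibility %.1f\n", fog.update(1.0f / 25.0f));
}
```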
CNB021307369A 2002-11-13 2002-11-13 Universal virtual environment roaming engine computer system Expired - Fee Related CN100428218C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB021307369A CN100428218C (en) 2002-11-13 2002-11-13 Universal virtual environment roaming engine computer system

Publications (2)

Publication Number Publication Date
CN1414496A true CN1414496A (en) 2003-04-30
CN100428218C CN100428218C (en) 2008-10-22

Family

ID=4746450

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB021307369A Expired - Fee Related CN100428218C (en) 2002-11-13 2002-11-13 Universal virtual environment roaming engine computer system

Country Status (1)

Country Link
CN (1) CN100428218C (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU5019896A (en) * 1995-01-11 1996-07-31 Christopher D Shaw Tactile interface system
AU3954997A (en) * 1996-08-14 1998-03-06 Nurakhmed Nurislamovich Latypov Method for following and imaging a subject's three-dimensional position and orientation, method for presenting a virtual space to a subject, and systems for implementing said methods

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101031866B (en) * 2004-05-28 2010-11-10 新加坡国立大学 Interactive system and method
CN100465840C (en) * 2004-07-07 2009-03-04 西门子公司 Method for simulating a technical installation
CN100400021C (en) * 2005-07-19 2008-07-09 天津大学 Arrangement for monitoring walk-assisting kinetic parameters of walking device
CN100440142C (en) * 2005-10-27 2008-12-03 三星电子株式会社 Three-dimensional motion graphical user interface and apparatus and method of providing same
CN100459508C (en) * 2005-12-12 2009-02-04 腾讯科技(深圳)有限公司 Internet fluid media interdynamic system and fluid media broadcasting method
CN101055494B (en) * 2006-04-13 2011-03-16 上海虚拟谷数码科技有限公司 Dummy scene roaming method and system based on spatial index cube panoramic video
CN101630402B (en) * 2008-07-14 2017-06-16 苏州远唯网络技术服务有限公司 A kind of tree-dimensional animation engine for ecommerce
CN102483856A (en) * 2009-06-25 2012-05-30 三星电子株式会社 Virtual world processing device and method
CN101615305B (en) * 2009-07-24 2011-07-20 腾讯科技(深圳)有限公司 Method and device for detecting collision
CN101702245B (en) * 2009-11-03 2012-09-19 北京大学 Extensible universal three-dimensional terrain simulation system
CN101816613B (en) * 2010-06-07 2011-08-31 天津大学 Ant-colony calibrating precise force-measuring walking aid device
CN101816613A (en) * 2010-06-07 2010-09-01 天津大学 Ant-colony calibrating precise force-measuring walking aid device
CN102332179A (en) * 2010-09-20 2012-01-25 董福田 Three-dimensional model data simplification and progressive transmission methods and devices
CN102385762A (en) * 2011-10-20 2012-03-21 上海交通大学 Modelica integrated three-dimensional scene simulation system
CN102520950A (en) * 2011-12-12 2012-06-27 广州市凡拓数码科技有限公司 Method for demonstrating scene
CN104076915A (en) * 2013-03-29 2014-10-01 英业达科技有限公司 Exhibition system capable of adjusting three-dimensional models according to sight lines of visitors and method implemented by exhibition system
CN104346368A (en) * 2013-07-30 2015-02-11 腾讯科技(深圳)有限公司 Indoor scene switch displaying method and device and mobile terminal
CN103543754A (en) * 2013-10-17 2014-01-29 广东威创视讯科技股份有限公司 Camera control method and device in three-dimensional GIS (geographic information system) roaming
CN103810559A (en) * 2013-10-18 2014-05-21 中国石油化工股份有限公司 Risk-assessment-based delay coking device chemical poison occupational hazard virtual reality management method
CN103606194A (en) * 2013-11-01 2014-02-26 中国人民解放军信息工程大学 Space, heaven and earth integration situation expression engine and classification and grading target browsing method thereof
CN103606194B (en) * 2013-11-01 2017-02-15 中国人民解放军信息工程大学 Space, heaven and earth integration situation expression engine and classification and grading target browsing method thereof
CN105474271A (en) * 2014-02-13 2016-04-06 吉欧技术研究所股份有限公司 Three-dimensional map display system
CN105474271B (en) * 2014-02-13 2018-10-02 吉欧技术研究所股份有限公司 Relief map display system
CN105824690A (en) * 2016-04-29 2016-08-03 乐视控股(北京)有限公司 Virtual-reality terminal, temperature adjusting method and temperature adjusting device
CN106910236A (en) * 2017-01-22 2017-06-30 北京微视酷科技有限责任公司 Rendering indication method and device in a kind of three-dimensional virtual environment
CN108960947A (en) * 2017-05-19 2018-12-07 深圳市掌网科技股份有限公司 Show house methods of exhibiting and system based on virtual reality
US11049329B2 (en) 2017-07-25 2021-06-29 Tencent Technology (Shenzhen) Company Limited Method and apparatus for controlling placement of virtual character and storage medium
CN107450747A (en) * 2017-07-25 2017-12-08 腾讯科技(深圳)有限公司 The displacement control method and device of virtual role
US11527052B2 (en) 2017-07-25 2022-12-13 Tencent Technology (Shenzhen) Company Limited Method and apparatus for controlling placement of virtual character and storage medium
CN108762209A (en) * 2018-05-25 2018-11-06 西安电子科技大学 Production Line Configured's analogue system based on mixed reality and method
CN109785424A (en) * 2018-12-11 2019-05-21 成都四方伟业软件股份有限公司 A kind of three-dimensional asynchronous model particle edges processing method
CN110245445A (en) * 2019-06-21 2019-09-17 浙江城建规划设计院有限公司 A kind of ecology garden landscape design method based on Computerized three-dimensional scenario simulation
CN110880204A (en) * 2019-11-21 2020-03-13 腾讯科技(深圳)有限公司 Virtual vegetation display method and device, computer equipment and storage medium
CN111243070B (en) * 2019-12-31 2023-03-24 浙江省邮电工程建设有限公司 Virtual reality presenting method, system and device based on 5G communication
CN111243070A (en) * 2019-12-31 2020-06-05 浙江省邮电工程建设有限公司 Virtual reality presenting method, system and device based on 5G communication
CN111583403A (en) * 2020-04-28 2020-08-25 浙江科澜信息技术有限公司 Three-dimensional roaming mode creating method, device, equipment and medium
CN111583403B (en) * 2020-04-28 2023-06-09 浙江科澜信息技术有限公司 Three-dimensional roaming mode creation method, device, equipment and medium
CN112907618A (en) * 2021-02-09 2021-06-04 深圳市普汇智联科技有限公司 Multi-target sphere motion trajectory tracking method and system based on rigid body collision characteristics
CN112907618B (en) * 2021-02-09 2023-12-08 深圳市普汇智联科技有限公司 Multi-target sphere motion trail tracking method and system based on rigid body collision characteristics
CN115857702A (en) * 2023-02-28 2023-03-28 北京国星创图科技有限公司 Scene roaming and view angle conversion method in space scene
CN115857702B (en) * 2023-02-28 2024-02-02 北京国星创图科技有限公司 Scene roaming and visual angle conversion method under space scene

Also Published As

Publication number Publication date
CN100428218C (en) 2008-10-22

Similar Documents

Publication Publication Date Title
CN100428218C (en) Universal virtual environment roaming engine computer system
CN107103638B (en) Rapid rendering method of virtual scene and model
Airey et al. Towards image realism with interactive update rates in complex virtual building environments
Pettré et al. Real‐time navigating crowds: scalable simulation and rendering
US6271842B1 (en) Navigation via environmental objects in three-dimensional workspace interactive displays
CN108074274A (en) BIM model real-time rendering method and devices based on browser
Liu Three-dimensional visualized urban landscape planning and design based on virtual reality technology
CN1118777C (en) Method for controlling level of detail displayed in computer generated screen display of complex structure
CN105654545A (en) Construction and hierarchical display control method for 3D interactive villa type
JPH0757117A (en) Forming method of index to texture map and computer control display system
Larive et al. Wall grammar for building generation
CN110992510A (en) Security scene VR-based automatic night patrol inspection method and system
Agnello et al. Virtual reality for historical architecture
US6222554B1 (en) Navigation in three-dimensional workspace interactive displays having virtual force fields associated with selected objects
CN104914993A (en) Experience type design method for controlling civil aircraft passenger cabin seat adjustment by gestures
Sareika et al. Urban sketcher: Mixed reality on site for urban planning and architecture
JP4525362B2 (en) Residential 3D CG system
CN1753030A (en) Human machine interactive frame, faced to three dimensional model construction
CN106530390A (en) Intelligent regional landscape element material library combination system and method
CN115984467A (en) Multi-room indoor house type graph reconstruction method based on unsupervised learning
CN102800116B (en) Method for rapidly creating large-scale virtual crowd
CN101702243B (en) Group movement implementation method based on key formation constraint and system thereof
McIlveen et al. PED: Pedestrian Environment Designer.
Greenberg An interdisciplinary laboratory for graphics research and applications
Drettakis et al. Image-based techniques for the creation and display of photorealistic interactive virtual environments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20081022

Termination date: 20111113