1. Introduction

1.1 Compiling and Linking

2. Writing and Reading HOOPS Stream Files

2.1 Writing

2.1.1 Overview
2.1.2 Multi-Purpose Opcode Handlers
2.1.3 Compression
2.1.4 Using the TKE_View opcode
2.1.5 Referencing External Data Sources
2.1.6 Controlling the Quality of the Streaming Process
2.1.7 Creating an HSF with LODs
2.1.8 Writing Examples
2.1.9 Writing Options

2.2 Reading

2.3 Controlling the Reading and Writing Process

2.3.1 Overview
2.3.2 Controlling Reading
2.3.3 Controlling Writing

2.4 Verifying HSF files

2.5 HOOPS/3dGS Classes


3. Streaming an HSF File

3.1 Basic Streaming

3.2 Performing Streaming on a Separate Thread
 

4. Customizing the HOOPS Stream File

4.1 Customizing HSF Objects

4.2 Versioning and Storing Additional User Data

4.3 Tagging HSF Objects to Associate User Data 
 

5. Maximizing Performance

5.1 Rendering

5.1.1 Scene-graph organization
5.1.2 Shell organization
5.1.3 Polygon handedness


1. Introduction

This guide explains how to use the HOOPS/Stream base classes to directly export/import data to/from an HSF file.  These classes are intended to be used to export an HSF file when the graphics information is not stored in the HOOPS 3D Graphics System scene-graph, or to import an HSF file when the graphics information is not going to be mapped to the HOOPS/3dGS scene-graph.   This is useful for developers who already have their own graphics subsystem during the export phase, the import phase, or both. 
If the data to be exported to an HSF file already resides in the HOOPS/3dGS scene-graph, or if data being imported from an HSF file needs to be mapped to the HOOPS/3dGS scene-graph, then the 3dGS-specific classes should be used.  The 3dGS-specific classes and the full HOOPS 3D Graphics System are part of the HOOPS 3D Application Framework (HOOPS/3dAF).
The following are prerequisites to creating or importing an HSF file using the base classes:
1.   Because an HSF file is essentially an archive of the HOOPS/3dGS scene-graph contents (geometry, attributes, and segments) it is important to understand how the scene-graph should be organized, and what geometry and attributes are supported. Refer to the HOOPS/3dGS Programming Guide for information on scene-graph architecture/usage and details on supported geometry and attributes.  The HOOPS/3dGS Reference Manual provides specific details on geometry types and multi-option attributes. (The HOOPS/3dGS Reference Manual is located in the HOOPS/3dGS section of the documentation.)
2.  An understanding of HSF opcodes/objects, opcode handlers, and general HOOPS/Stream Toolkit architecture.  This information is reviewed in the HOOPS/Stream Technical Overview
.

1.1 Compiling and Linking

The HOOPS/Stream base-class headers, source code, export library (for Windows) and libraries/dlls are located in the /dev_tools/base_stream subdirectories.   The /dev_tools/base_stream/source directory includes the Microsoft DevStudio Project files or Unix makefile to enable rebuilding. 
 
 
 


2. Reading and Writing HOOPS Stream Files

2.1 Writing

2.1.1 Overview
 

As reviewed in the HSF File Architecture document, an HSF file must have the following structure:
<TKE_Comment>               Required - contents denote file version
<opcode for a scenegraph object>
               .
               . 
               .
<opcode for a scenegraph object>
<TKE_Termination>           Required
This means that the first opcode exported to the file must be TKE_Comment with contents that are specifically formatted to contain the file version (the TK_Header class manages this automatically, and is discussed later).  The last opcode exported must be TKE_Termination.
To create an HSF file, you must first create a  BStreamFileToolkit object, and then manually create and initialize opcode-handlers (or custom objects derived from them) and export their contents. 
Opcode handlers are derived from BBaseOpcodeHandler.  This is an abstract class used as a base for derived classes which manage logical pieces of binary information.  BBaseOpcodeHandler provides virtual methods which are implemented by derived classes to handle reading, writing, execution and interpretation of binary information.  (The methods are called Read, Write, Execute and Interpret.)  Execution refers to the process of populating application specific data structures with the binary information that has been read from a file or user-provided buffer within the Read method.  Interpretation refers to the process of extracting application specific data to prepare it for subsequent writing to a file or user-provided buffer within the Write method.
 
Naming Conventions
Naming conventions for opcodes and opcode handlers are as follows:
HSF file opcodes                -    TKE_<opcode>
opcode handler classes     -    TK_<object-type>

Initializing Opcodes
During file writing, you must first access the graphical and user data that you wish to export and initialize the opcode's data structures.  You could do this initialization work in the Interpret method of the opcode handler and call that method prior to exporting the opcode to the file.  The opcode handler could also be directly initialized via the public interface after construction. 
 
Exporting Opcodes
After the 'interpretation/initialization' phase is complete, you must call the Write method of the opcode handler until writing of the current opcode is complete.  This will export the opcode data to an accumulation buffer (of user-specified size) that must initially be passed to the toolkit.  This buffer can then be exported to an HSF file or utilized directly.   The sample code discussed later in this section contains a reusable 'WriteObject' function that encapsulates the work necessary to export an object to a file; with minor modification, it could be used to export an object to application data-structures, over a network, etc...
 
Resetting Opcodes
If an opcode handler object is going to be reused to deal with another chunk of data, then the BBaseOpcodeHandler::Reset method should be called.  This reinitializes opcode handler variables and frees up temporary data. 
 

2.1.2 Multi-Purpose and Utility Opcode Handlers

Some opcode-handlers can be used to process more than one opcode; when using these objects, the desired opcode must be passed into the opcode handler's constructor.  (To find out which opcode handler supports each opcode, refer to the opcode registration list).  For example, the TK_Color_By_Index opcode handler supports both the TKE_Color_By_Index opcode and TKE_Color_By_Index_16 opcode.
Additionally, some of the TK_XXX classes are not actually opcode handlers, but rather serve as utility classes which simply export more than one opcode.  For example the TK_Header class will export both the TKE_Comment opcode (with contents denoting the file version) and TKE_File_Info opcode.  The version number that will be exported by default is the latest version of the HSF file that the HOOPS/Stream Toolkit supports.   This is defined in BStream.h via the  TK_File_Format_Version define.
 

2.1.3 Compression

The HOOPS/Stream toolkit supports lossless LZ compression of the exported data.  To enable compression, export the TKE_Start_Compression opcode.  The toolkit will automatically be placed in 'compression mode' after this opcode is exported, and all subsequently exported opcodes will be compressed.  To stop compression, export the TKE_Stop_Compression opcode.   Typically, the TKE_Start_Compression opcode would be exported at the beginning of the file (but just after the TKE_Comment opcode which contains version information), and the TKE_Stop_Compression opcode would be exported at the end of the file (but just before the TKE_Termination opcode)  Because this file wide LZ compression capability is lossless, provides good compression results, and is fairly efficient during both export and import, it should always be used.
 
Example:
Let's say we want to write out an HSF that contains a  'segment' opcode, and have the segment contain a single 'marker' opcode.  (A marker is denoted by a single 3D point.)  The HSF file would need to have the following structure:
<TKE_Comment> 
<TKE_File_Info>
<TKE_Start_Compression> 
<TKE_Open_Segment>
<TKE_Marker>
<TKE_Close_Segment>
<TKE_Stop_Compression>
<TKE_Termination> 
The code required to create this HSF file is here.
 

2.1.4 Using the TKE_View Opcode

It is very useful to store some information at the beginning of the file which denotes the extents of the scene, so that an application which is going to stream the file can setup the proper camera at the beginning of the streaming process. Otherwise, the camera would have to continually get reset as each new object was streamed in and the scene extents changed as a result.
The TKE_View opcode is designed for this purpose.  It denotes a preset view which contains camera information, and has a name.  An HSF file could have several TKE_View objects, for example,  to denote 'top', 'iso', and 'side' views. 
The HOOPS Stream Control and Plug-In (an ActiveX control and Netscape Plug-In that can stream in HSF files over the web), along with the various PartViewers provided by Tech Soft America, all look for the presence of a TKE_View object near the beginning of the HSF file with the name 'default'.  If one is found, then the camera information stored with this 'default' TKE_View object is used to setup the initial camera.
If you (or your customers) are going to rely on the Stream Control or Plug-In to view your HSF data, then you should export a 'default' TKE_View opcode as discussed above.  If you are going to create your own HSF-reading application to stream in HSF files that you've generated, then that application should have some way of knowing the extents of the scene at the beginning of the reading process; this can only be achieved if your writing application has placed scene-extents information at the beginning of the HSF file (probably by using the TKE_View opcode), and your reader is aware of this information.
 
Example:
An HSF with the TKE_View opcode, along with a segment containing polyline and marker objects would look like:
<TKE_Comment> 
<TKE_File_Info> 
<TKE_View>
<TKE_Start_Compression>
<TKE_Open_Segment>
<TKE_Polyline>
<TKE_Marker>
<TKE_Close_Segment>
<TKE_Stop_Compression>
<TKE_Termination> 
The code required to create this HSF file is here.
 

2.1.5 Referencing External Data Sources

The TKE_External_Reference opcode is used to represent a reference to external data sources. The reference would typically be a relative pathname but could also be an URL. This opcode is intended to be handled in a manner similar to TK_Referenced_Segment, where the scene-graph information located in the reference should be loaded into the currently open segment. For example, a reference of './left_tire.hsf' located immediately after a TKE_Open_Segment opcode would indicate that the HOOPS/3dGS scene-graph contained in left_tire.hsf should be created within the open segment. A reference of http://www.foobar.com/airplane.hsf would indicate that the .hsf resides at a website, and the reader must access the data (it may choose to first download the entire file and then display it, or stream the data in and display it incrementally)
 

2.1.6 Controlling the Quality of the Streaming Process

The quality of the graphics streaming process is essentially based on how quickly the user gets an overall feel for the scene. One common technique involves exporting lower Levels of-Detail (LODs) for 3D objects within the scene since they can stream in more quickly.  Another  technique involves ordering objects within the file so that the most important objects in the scene are ordered towards the front of the file.   Objects which are larger and closer to the camera are typically the most important. 
While the HOOPS/3dGS-specific 3dgs classes provide built in logic to create LOD  representations of objects, as well as logic to smartly order geometry within the file (exporting LOD representations first and sorting them base on cost::benefit ratio), such logic is not currently supported by the base classes.  This is primarily because the BStreamFileToolkit object doesn't 'know' where the data is, or how it is arranged.  Since the developer is manually traversing their own graphics information and mapping it to HSF objects, LODs must be manually generated/exported and any ordering/sorting would need to be done by the developer.
 

2.1.7 Creating an HSF with LODs

A more practical example of an HSF file is one that contains a 'real world' scene-graph, including:
- shells containing several LODs, local attributes and compression/write options
- modeling matrices
- inclusions (instancing)
- colors, etc... 
After reviewing the Write Options section below, let's take the case where we want to write out the following scene-graph:
 


Since the main segment tree references other segments (each reference is denoted by a TKE_Include_Segment object), the segments in the 'include library' must come first in the file.  Typically, it is desirable to have any LOD representations read in first so that the reading application (which may be incrementally streaming in the data) can quickly provide a rough depiction of the scene.  Therefore, we need to store LOD representations of shells at the beginning of the file.   The HOOPS/Stream Toolkit supports the concept of tagging, which enables the developer to first output a LOD representation of the shell, and then later output another LOD representation (or the full representation) of that same shell and associate back to the original shell.   If you want to be able to maintain this association during reading, you must follow tagging procedures which are discussed later on, in  section 4.3: Tagging HSF Objects. Since the graphical information is coming from a custom set of data structures, you will need to provide your own LOD representations for shells. 
Note: LOD support contained in the HOOPS 3D Graphics System could still be leveraged in the case where you are manually creating an HSF file, where you could call the HOOPS/3dGS utility function 'HC_Compute_Optimized_Shell' to generate LODs.  This requires access to the HOOPS/3dGS API, available as part of the HOOPS 3D Application Framework.  Contact Tech Soft America for HOOPS/3dAF licensing details.
The following is one possible structure of the HSF file which represents the above scene-graph and orders the various representations of the shell primitives: 
<TKE_Comment> 
<TKE_File_Info> 
<TKE_View>
<TKE_Start_Compression>
<TKE_Colormap>
<TKE_Open_Segment>                // /include_library/object_1 segment
<TKE_Shell>                       // id=13, LOD 2 - output LOD 2 
<TKE_Close_Segment> 
<TKE_Open_Segment>                // /include_library/object_2 segment
<TKE_Shell>                       // id=14, LOD 2 - output LOD 2
<TKE_Close_Segment> 
<TKE_Open_Segment>                // part_1 segment
<TKE_Include_Segment>             // include the object_1 segment
<TKE_Modelling_Matrix>            // give it a unique mod matrix
<TKE_Color_RGB>                   // apply a local color
<TKE_Close_Segment> 
<TKE_Open_Segment>                // part_2 segment
<TKE_Include_Segment>             // include the object_2 segment
<TKE_Modelling_Matrix>            // give it a unique mod matrix
<TKE_Close_Segment> 
<TKE_Shell>      // id=13, LOD 1  -  output LOD 1 for the shells
<TKE_Shell>      // id=14, LOD 1
<TKE_Shell>      // id=13, LOD 0  -  output LOD 0 which is the original
<TKE_Shell>      // id=14, LOD 0 
<TKE_Close_Segment>
<TKE_Stop_Compression>
<TKE_Termination> 
 
The code required to create this HSF is here.  Note how the example reuses opcode handlers in cases where more than one object of a specific type is going to be exported to the HSF file.
 
Non-Shell LODS
The LOD representation for a shell object is not restricted to a shell, but can be composed of one or more non-shell objects.  For example, a circle or several polylines could be used as the LOD representation for a shell. 
This is achieved by calling the TK_Shell::AppendObject method for each primitive to be used as part of the non-shell LOD representation.  This would be called during initialization of the TK_Shell data  (typically performed within the Interpret method)  TK_Shell::AppendObject does not make any copies of the object passed into it; it only stores a pointer to objects.   Therefore, all objects need to be manually cleaned up before using the shell opcode handler again, or when deleting the object. TK_Shell::PopObject should be used to obtain the pointer to the next object and remove it from the shell handler's list of LOD objects. 
The sample code reviewed above (simple_hsf3.cpp) includes an example of using a non-shell LOD (a circle) to represent LOD level 2 of the shell.   Note that cleanup of the non-shell LOD object(s) is performed within the overloaded Reset method, which calls the base class' Reset method.  This ensures that when the shell opcode handler is reset, everything will be properly cleaned up before the opcode handler object is reused.  The sample code performs cleanup o the non-shell LOD objects in the Reset method instead of an overloaded constructor method because it reuses the custom shell opcode handler. 
 
 

2.1.8 Writing Examples

We've seen examples of how to export several opcodes to an HSF file. Most opcodes are 'self-contained', and it is fairly easy to see how to initialize them by looking at the definition of the associated opcode-handler class. The protected data members must be initialied, and public functions are provided for doing so. However, some graphical attributes are more complicated in that they require export of several opcodes. This section will cover more complex situations such as these, and will evolve over time.
One of the more complex graphicsl attributes is textures. Recalling that HSF objects are essentially archives of HOOPS/3dGS scene-graph objects, it is useful to review how texture-mapping works in HOOPS/3dGS. First, an image must be defined. Then, a texture must be defined which refers to that image. The color of the faces of a shell (or mesh) must be set to the texture name, and finally, the vertex parameters must be set on the vertices of the shell (which map into the texture)
To export this info to an HSF file, the following opcodes must be exported:
1. TK_Image
2. TK_Texture (this must be exported after TK_Image, since it refers to it)
3. TK_Color (this must be exported after TK_Texture, since it refers to it)
An example of how to export a shell with a texture applied is located here.
 

2.1.9 Write Options 

The HOOPS/Stream Toolkit supports a variety of compression and streaming options which are used when exporting an HSF file.  It may be desirable to modify these settings based on how your model is organized, the size of the model, and the amount of preprocessing time that is acceptable.  
Write options are set on the toollkit by calling BStreamFileToolkit::SetWriteFlags   File write options are specified by TK_File_Write_Options enumerated type in BStream.h
When using the base classes to manually map graphics information to an HSF file, only a subset of the file-write-options are supported, and the details are listed in each of the option descriptions below:
 
Supported
Unsupported
TK_Full_Resolution_Vertices
TK_Suppress_LOD
TK_Full_Resolution_Normals
TK_Disable_Priority_Heuristic
TK_Full_Resolution
TK_Disable_Global_Compression
TK_Force_Tags
TK_Generate_Dictionary
TK_Connectivity_Compression
TK_First_LOD_Is_Bounding_Box
Those in the "unsupported" column are there because they only make sense in the context of a specific graphics system, and dictate overall file organization. (They are supported by the '3dgs' classes)  Users of the base classes are free to implement them (or not implement them), according to the needs of their application. All bits are by default off (set to zero). The following reviews the various types of options, along with their default values and usage:
 
A.  Compression
Global Compression
The toolkit performs LZ compression of the entire file using a public domain component called 'zlib'; this is a lossless compression technique that permits pieces of the compressed file to be streamed and decompressed, and is computationally efficient on both the compression and decompression sides.
Usage:  off by default; needs to be manually enabled by exporting TKE_Start_Compression and TKE_Stop_Compression opcodes to the file.   Setting TK_Disable_Global_Compression will have no effect.
The HOOPS/Stream Toolkit will also compress raster data by default, using a JPEG compression utility. The compression level of this data can be controlled by calling BStreamFileToolkit::SetJpegQuality
 
Geometry Compression
Geometry compression is currently focused on the 'shell' primitive, (represented by the TKE_Shell opcode, and handled by the TK_Shell class) This is the primary primitive used to represent tessellated information.  Datasets typically consist primarily of shells if the data sets originated in MCAD/CAM/CAE applications.
A TK_Shell object has local write suboptions which may or may not reflect the directives from the BStreamFileToolkit object's write options. A public function, TK_Shell::InitSubop() is available to initialize the write suboptions of TK_Shell with the BStreamFileToolkit write options.   You should setup your desired write options on the BStreamFileToolkit object, and then call InitSubop within your shell opcode-handler's constructor or Interpret function.  The shells' local suboptions may also be directly modified by calling TK_Shell::SetSubop(), and passing in any combination of the options defined in BOpcodeShell.h
B.  Dictionary
Part of the HSF specification is a "dictionary" of file offsets.  Its main purpose is to allow selective refinement of graphic database detail.  The 3dgs classes will write such a dictionary at the end of the file if the TK_Generate_Dictionary write option is set.  Though it would also be possible to create a dictionary with the base classes, there is not yet a public interface to do so.  Users of the base classes who would like to take advantage of this area of HSF should contact technical support. 
C.  LOD Options 
Three of the file write options (TK_Suppress_LOD, TK_First_LOD_Is_Bounding_Box and TK_Disable_Priority_Heuristic) control the existence and/or appearance of levels of detail. As with geometry compression (see above), these options are currently geared towards the TK_Shell opcode.

D.  Tagging
The toolkit supports the concept of tagging, discussed in section 4.3: Tagging HSF Objects  Setting  TK_Force_Tags will cause tags to be automatically generated by the toolkit during the writing process.  (Note: tags will always be generated for shells regardless of the value of this write option.)
 
E.  Global Quantization
Setting TK_Global_Quantization will cause any required quantization to be global (bbox of scene) instead of local (bbox of individual geometry) . This is useful for situations where high-level objects are split up into mulitple shells, since it avoids cracks between the sub-objects (Using a solid modeling example, this would be a situation where a shell was used for each 'face', instead of using a single shell for each higher-level 'body'.) Regardless of this flag, however, local quantization applies until the first TKE_Bounding_Info. This flag is off by default.
 

 

2.2 Reading

The function supplied by the base classes to perform black-box reading of a HOOPS Stream File is TK_Read_Stream_File
Black-box reading using TK_Read_Stream_File requires the following steps:
It is only necessary to register opcode handlers to deal with the HSF opcodes objects that are of interest (and will be mapped to custom data structures).  Opcodes in the file which do not have a custom opcode handler registered for them will be handled by the BStreamFileToolkit's 'default' opcode handler; this default handler will simply skip over the opcode.   The toolkit can notify you of HSF file objects which do not have a custom opcode-handler registered for them, if you set the TK_Flag_Unhandled_Opcodes bit in the flags parameter of the reading function.  (It will return TK_Error for unhandled opcodes)
Let's take the case where the reading application only cares about reading the segment and shell objects from HSF files.   This could be supported via the following code:
#include "BStream.h"  
void my_reading_function()  
   {  
          TK_Status  status;  
    BStreamFileToolkit * tk = new BStreamFileToolkit;  
    tk->SetOpcodeHandler (TKE_Open_Segment, new    TK_My_Open_Segment); 
       tk->SetOpcodeHandler (TKE_Close_Segment, new TK_My_Close_Segment);    
       tk->SetOpcodeHandler (TKE_Shell, new TK_My_Shell);  
    status = TK_Read_Stream_File("sample.hsf",    tk);  
    if (status == TK_Version)  
       { 
           MessageBox("This file was created    with a newer version of the 
                         HOOPS/Stream Toolkit.\nTo view it this application's 
                         version of the toolkit will need to be updated."); 
       } else if (status = TK_Error) 
           MessageBox("Error reading file.");    
   } 
     
     
      

2.3 Controlling the Reading and Writing Process

2.3.1 Overview

In addition to the high-level read/write functions which support reading from and writing to a disk file,  the HOOPS/Stream Toolkit also supports writing and reading HOOPS Stream File information to and from a user-specified location.  This is a powerful feature which enables the application developer to store the HOOPS Stream File information within a custom application specific file format (or any location) and retrieve it from the custom location, rather than use a separate .hsf file.   More importantly, the data can be incrementally streamed into the reading application's scene-graph.
For example, many technical applications that also visualize 2D/3D information utilize a custom file format that contains application specific data.   When the file is read in, the application then goes through a laborious process of recreating the 2D/3D information associated with the application data.    By utilizing the HOOPS/Stream Toolkit, a developer could cache the scene-graph geometry in their own proprietary file format file by actually embedding the .hsf information in their file.  File load time and initial rendering is drastically reduced, the custom file format remains intact, and the highly compressed .hsf information minimizes the increase of file size.
Support for controlling the reading and writing process is provided by the BStreamFileToolkit class.  An instance of an BStreamFileToolkit object should be created for each file that is being read or written, and then either the ParseBuffer or GenerateBuffer method should be called to control reading and writing, respectively. 
 

2.3.2 Controlling Reading

First review section 2.2: Reading HSF Files.  To control the reading process, a piece of binary data that has been read from an .hsf file is presented to the BStreamFileToolkit object for parsing and insertion into your custom data structures by calling the BStreamFileToolkit::ParseBuffer method.  This method doesn't care where the data originated from, but simply reads the data from the buffer passed to it, and calls the Read and Execute methods of the opcode handler registered to handle the current opcode being processed.  Therefore, if you want to access custom HSF objects, you will need to have first registered custom opcode handlers for the objects of interest (and implement the Execute methods to do something with the data.) 
The following code example demonstrates how data could be manually read from a local file and inserted into your custom data structures using ParseBuffer.   A file is open and pieces of data are read from it using the BStreamFileToolkit wrapper functions for file opening and reading ( OpenFile() and ReadBuffer() )   Data is continually read and passed  to ParseBuffer until it returns TK_Complete,  indicating that reading is complete, or until an error occurs. 
 
void Read_Stream_File (char const * filename)  
   { 
       auto        char                       block[BUFFER_SIZE]; 
       auto        TK_Status                  status = TK_Normal; 
       auto        int                        amount;  
    BStreamFileToolkit * tk = new BStreamFileToolkit;  
    // our sample custom toolkit only cares about    segment and shells 
      tk->SetOpcodeHandler (TKE_Open_Segment, new TK_My_Open_Segment);    
      tk->SetOpcodeHandler (TKE_Close_Segment, new TK_My_Close_Segment);    
      tk->SetOpcodeHandler (TKE_Shell, new TK_My_Shell);  
    if ((status = tk->OpenFile (filename)) != TK_Normal)    
           return status;  
    do { 
           if (tk->ReadBuffer (block, BUFFER_SIZE,    amount) != TK_Normal) 
               break;  
        status = tk->ParseBuffer    (block, amount);  
        if (status == TK_Error)    { 
               // whatever...    
               break; 
           } 
       } while (status != TK_Complete);  
   tk->CloseFile ();  
    delete tk; 
   } 
      

2.3.3 Controlling Writing

Controlling writing using the base classes is already explained in Section 2.1:  Writing HSF Files  Since writing out an HSF file using the base classes must be done manually anyway (because the developer has to supply their own logic ot traverse the graphics information and directly export HSF objects), exporting to a buffer rather than a file is just a special case of the WriteObject function described in the example programs in Section 2.1  The only difference would be to omit the 'fwrite' call, and deal with the HSF data buffer directly.  (Perhaps by sending it to another application, or exporting it to your own non-HSF file, etc...)
 

2.4 Verifying HSF Files

Licensees of HOOPS/Stream who chose to use '.hsf' as their file name extension are contractually required to write compliant HSF files.
Therefore, it is highly recommended that you take steps to verify the correctness of HSF files that you export using the base classes, since there is the potential of incorrectly formatting the data (especially user-data) or incorrectly organizing the scene-graph.  For example, every TK_Open_Segment opcode must be matched by TK_Close_Segment opcode.  Testing can be performed in two phases:
 
Basic Testing
The SDK includes a Reference Application called the MfcHoopsRefApp under Windows distributions, and QtHoopsRefApp under Unix distributions.  It is located in the /bin directory.  After creating an HSF file, read it into the Reference Application.  If there is an error with the formatting of the HSF data, the app will generate a corresponding message box and Advanced Testing should be performed (discussed below).   If no message is generated, then the formatting of the data in the HSF file is valid. 
Correct formatting of the data is not to be confused with the validity of the scene-graph represented in the HSF file.  If there is a problem with how the scene-graph is organized, the reference application will generate a HOOPS/3dGS-specific error.  This error should be used to try and locate the problem with how the scene-graph was organized.  Additionally, after a problematic file has finished loading, it can be useful to export it to an 'HMF' file from the application.  This is a readable, ASCII representation of the scene-graph.  Inspection of this file may provide clues to the problem with scene-graph organization.
 
Advanced Testing
If the Reference Application reports that there was an error with reading the file, then further steps must be taken to determine the problem. If user data is being written out (via the TKE_Start_User_Data opcode, which can be manually exported or exported using the TK_User_Data opcode handler) confirm that the data is being properly formatted per the notes in Section 4.1: Customizing HSF Objects   This type of error could also be due to a bug in the writing or reading logic of the toolkit itself. 
The toolkit provides logfile capabilities to help track down such problems.  To use these capabilities, perform the following steps: 
1.  call BStreamFileToolkit::SetLogging(true) 
2.  re-export the HSF file; the toolkit will create a file called hsf_export_log.txt which contains a byte representing each opcode that was exported. 
3.  read the HSF file using the  TK_Read_Stream_File function as reviewed in Section 2.3.2: Controlling Reading (ensure that any custom opcode handlers that you've created are registered with the BStreamFileToolkit object passed into TK_Read_Stream_File) The toolkit will create a file called hsf_import_log.txt which contains a byte representing each opcode that was imported. 
4.  Compare the opcodes in the two log files and look for the first opcode where they differ (if any).  It is likely that the first pair of matching opcodes has a problem with data formatting (or the toolkit has a bug).  If you cannot find any problem with how you've formatted or exported the data, submit the problematic HSF file and the log files to technical support. 
 
 

2.5  HOOPS/3dgs Classes

As previously mentioned, the HOOPS/3dGS-specific classes encapsulate the work of traversing/querying/exporting the HOOPS/3dGS scene-graph to an HSF file, as well as the work of reading an HSF file and mapping HSF objects to a HOOPS/3dGS scene-graph.  Because HOOPS/3dGS-specific classes are derived from the base classes,  (performing the above logic in overloaded versions of the Interpret and Execute methods), they provide a valuable reference of how to use the base classes, and their source code is included with the toolkit.

3. Streaming an HSF File

Streaming of 3D data typically refers to a process whereby graphical information is  retrieved from a remote location such as a website or server and is displayed as soon as it is received by the client application.  It allows the end-user to quickly obtain some visual feedback, as well as interact with the scene as it is still being displayed. 
 

3.1 Basic Streaming

The following steps are necessary to add support for streaming an HSF into an application:
In review, it is up to the developer to map the data to their custom data-structures within the Execute method of their custom opcode handlers when using the base classes. 
The following code demonstrates how an HSF file called 'factory.hsf'  could be streamed into the HOOPS database and incrementally drawn, using the base classes:
 
void Stream_HSF_File (char const * filename)  
   { 
       auto        char                       block[BUFFER_SIZE]; 
       auto        TK_Status                  status = TK_Normal; 
       auto        int                        amount;  
    BStreamFileToolkit * tk = new BStreamFileToolkit;  
    // our sample custom toolkit only cares about    segment and shells 
       tk->SetOpcodeHandler (TKE_Open_Segment, new TK_My_Open_Segment);    
       tk->SetOpcodeHandler (TKE_Close_Segment, new TK_My_Close_Segment);    
       tk->SetOpcodeHandler (TKE_Shell, new TK_My_Shell);  
    if ((status = tk->OpenFile (filename)) != TK_Normal)    
           return status;  
    do { 
           if (tk->ReadBuffer (block, BUFFER_SIZE,    amount) != TK_Normal) 
               break;  
        status = tk->ParseBuffer    (block, amount);  
        MyGraphicsUpdateFunction();    
      
        if (status == TK_Error)    { 
               // whatever...    
               break; 
           } 
       } while (status != TK_Complete);  
    tk->CloseFile ();  
    delete tk; 
   } 
      

3.2 Performing Streaming on a Separate Thread

The previous example is intended to demonstrate how data streaming would be performed on the same thread as the application.  The main application code and the data reading/streaming process all happen sequentially and synchronously within the same thread, and control does not return to the main application loop until reading is complete. However, it may be desirable to have the data read from the file asynchronously, independent of application processing.  This would allow the user to interact with the application in a normal fashion, while data is still being streamed from the file which may be local, or could be coming in via a slower intranet or internet connection. 
This can be supported by performing the reading on a thread which is separate from the main application thread.  After this thread reads in data of a user-specified size, it would post a message to the main application event loop indicating that it has a new chunk of data ready for processing.  Then, the main application loop would pass that data to the ParseBuffer function for processing and subsequent insertion into your application database or custom graphics data structures. 
Creation of a separate thread and posting a message to the main application loop involves platform and graphical user-interface (GUI) specific logic. 

 
 

4. Customizing the HSF File

In addition to writing and reading a standard HOOPS Stream File, the HOOPS/Stream Toolkit provides support for storing and retreiving user-defined data in the HSF file.  This data could be associated with HSF objects, or it could simply be custom data which is convenient to store inside an HSF. The toolkit also supports tagging of objects in the HSF file, which allows for association of HSF objects with user data.   The TKE_Start_User_Data opcode is used to represent user data.
 

4.1 Customizing HSF Objects

This section reviews the process of creating customized versions of default HSF objects.  This is achieved by replacing the BStreamFileTookit's default handler for a particular opcode with a custom opcode handler which is derived from the default handler class.  The custom opcode handler would provide support for writing and reading additional user-data. 
For example, let's say we wanted to write out an extra piece of user-data at the end of each piece of 'shell' geometry, (and of course retrieve it during reading) that represents a temperature value for each of the vertices in the shell's points array.   Given that the shell primitive is denoted by the TKE_Shell opcode, and handled by the TK_Shell opcode-handler, this would involve the following steps:
 
 
1.  Define a new class derived from TK_Shell that overloads the Write and Read methods to process the export and import of extra user data.
As previously mentioned, query/retrieval of the user data from custom data structures during the writing process would typically occur within the Interpret method of the opcode handler.  Similarly, mapping of the imported user data to custom application data structures would typically occur in the Execute method.  However, this work can be performed in the Write and Read methods as well, as the example indicates.
 
The following sample header expands upon the sample My_TK_Shell object reviewed in Section 2.1:  Writing an HSF,  by also overloading the Read and Write methods. 
 
#include "BOpcodeShell.h"  
class My_TK_Shell : public TK_Shell  
   { 
       protected:  
        int       my_stage;   // denotes the current processing stage  
    public:  
        My_TK_Shell() { my_stage    = 0; }  
        TK_Status      Execute (BStreamFileToolkit & tk) alter; 
           TK_Status   Interpret (BStreamFileToolkit    & tk, HC_KEY key,  
                                     int lod=-1) alter;  
        TK_Status      Read (BStreamFileToolkit & tk) alter; 
           TK_Status   Write (BStreamFileToolkit    & tk) alter;  
 
TK_Status   Clone (BStreamFileToolkit & tk,        BBaseOpcodeHandler **) const;                              
        void           Reset () alter; 
   }; 
     
      
2.  Implement the custom Write function.
This is done in stages, each of which correspond to the discrete pieces of data that need to be written out for the custom shell.  We use different versions of the BStreamFileTookit's PutData method to output the user data, and we return from the writing function during each stage if the attempt to output the data failed. (This could happen due to an error or because the user-supplied buffer is full.) At this point, review the process of Formatting User Data.
The following lists in detail the 5 writing stages for our custom shell opcode-handler :
Stage 0:  Output the default TK_Shell object by calling the base class' Write function 
( TK_Shell::Write )
Stage 1-4:  These stages write out the custom data (the temperature array) as well as formatting information required to denote a block of user data. 
1.  Output the TKE_Start_User_Data opcode to identify the beginning of the user data 
2.  Output the # of bytes of user data.
3.  Output the user data itself. 
4.   Output the TKE_Start_User_Data opcode to identify the end of the user data
 
TK_Status My_TK_Shell::Write (BStreamFileToolkit & tk)     
   { 
       TK_Status       status; 
    switch (m_stage)  
       { 
           // call the base class' Write function    to output the default  
           // TK_Shell object 
          case 0:  
           { 
               if ((status    = TK_Shell::Write(tk)) != TK_Normal) 
                      return status; 
               my_stage++;    
           }   nobreak;  
        // output the TKE_Start_User_Data    opcode 
           case 1:  
           { 
               if ((status    = PutData (tk, (unsigned 
                                      char)TKE_Start_User_Data)) != TK_Normal) 
                      return status; 
               my_stage++;    
           }   nobreak;  
       // output the amount of user    data in bytes; we're writing out  
           // 1 float for each vertex value,    so we have 4*m_num_values 
           case 2:  
           { 
               if ((status    = PutData (tk, 4*m_num_values)) != TK_Normal) 
                      return status;  
           m_progress    = 0;  
               my_stage++; 
           }   nobreak;   
        // output our custom    data, which in this example is an array of 
           // temperature values which are stored    in an application 
           // data structure called 'temperature_values'    
           // since the temperature values array    might always be larger 
           // than the buffer, we can't just    "try again" so always generate 
           // piecemeal, with m_progress the    number of values done so far 
           case 3:  
           {  
               if ((status    = PutData (tk, temperature_values, 
                                         m_num_values)) != TK_Normal)  
               my_stage++;  
        }   break;  
       case 4:  
          { 
          // output the TKE_End_User_Data opcode    which denotes the end 
               // of user    data 
               if ((status    = PutData (tk, (unsigned 
                                        char)TKE_End_User_Data)) != TK_Normal) 
                      return status; 
               my_stage    = -1; 
          }   break;  
       default: 
              return TK_Error;    
       }  
    return status; 
   } 
     
      
3.  Implement the custom Read function  This is also done in stages, each of which correspond to the discrete pieces of data that need to be read in for the custom shell.  We use different versions of the BStreamFileTookit's GetData method to retreive data, and we return from the reading function during each stage if the attempt to retreive the data failed. Otherwise, the stage counter is incremented and we move on to the next stage. 
The stages during the reading process are analogous to the stages during the writing process outline above, with one exception.   The  TKE_Start_User_Data opcode would still be read during 'Stage 1', but rather than blindly attempting to read our custom data, we need to handle the case where there isn't any user data attached to this shell object.  Perhaps the file isn't a custom file, or it was a custom file and this particular shell object simply didn't have any user data appended to it. 
It is also appropriate at this time to bring up the issue of versioning and user data; it is also possible that there is user data following this shell object, but it is not 'our' user data.  Meaning, it is not temperature data that was written out by our custom shell object, and therefore it is data that we don't understand; as a result, we could attempt to read to much or too little data.  If custom versioning information was written at the beginning of our custom file, and this versioning information was used to verify that this was a file written out by our custom logic, then it is generally safe to proceed with processing user data since we 'know' what it is.  The versioning issue, including details on how to write custom versioning information in the  file, is discussed in more detail in the next section 4.2: Versioning and Storing Additional User Data
Note that to check if there is any user data, we first call LookatData to simply look at (but not get) the next byte and verify that it is indeed a TKE_Start_User_Data opcode.  If not, we return.
 
TK_Status My_TK_Shell::Read (BStreamFileToolkit & tk)     
   { 
       TK_Status       status;  
    switch (my_stage)  
          { 
           case 0: { 
               if ((status    = TK_Shell::Read (tk)) != TK_Normal) 
                      return status; 
               my_stage++;    
           }   nobreak;  
        case 1:  
           { 
               unsigned    char temp;  
               // look at the next byte since it may not be the 
               // TKE_Start_User_Data    opcode 
               if ((status    = LookatData(tk, temp)) != TK_Normal) 
                      return status;  
               if (temp != TKE_Start_User_Data) 
                      return TK_Normal;   // there isn't any user data, so return!   
               // get the opcode from the buffer 
               if ((status    = GetData (tk, temp)) != TK_Normal) 
                      return status; 
               my_stage++;    
           }   nobreak;  
        case 2:  
           { 
               int length;  
       // get the integer denoting    the amount of user data 
               if ((status    = GetData (tk, length)) != TK_Normal) 
                      return status; 
               my_stage++;    
           }   break;  
        case 3:  
           { 
               // get the    temperature value array; this assumes we've 
               // already    determined the length of the array and identified 
               // it using    m_num_values 
               if ((status    = GetData (tk, temperature_values, 
                                         m_num_values)) != TK_Normal) 
                      return status; 
               my_stage++;    
           }   break;  
        case 4:  
           { 
               unsigned    char temp;  
       // get the TKE_End_User_Data    opcode which denotes the end of 
               // user data    
               if ((status    = GetData (tk, temp)) != TK_Normal) 
                      return status;  
               if (temp != TKE_End_User_Data) 
                    return TK_Error;  
               my_stage = -1; 
           }   break;  
        default: 
               return TK_Error;    
       }  
    return status; 
   } 
      
4.  Implement the custom Reset Function
The toolkit will call the opcode handler's Reset function after it has finished processing the opcode.   This method should reinitialize any opcode handler variables, free up temporary data and then call the base class implementation.
void My_TK_Shell::Reset() 
   { 
      my_stage = 0; 
      TK_Shell::Reset(); 
   } 
     
5. Implement the custom Clone function
TK_Status My_TK_Shell::Clone (BStreamFileToolkit & tk, BBaseOpcodeHandler    **newhandler) const 
{ 
*newhandler = new My_TK_Shell(); 
if ( *newhandler != null )      
 return TK_Normal; 
else 
return tk.Error(); 
} 

 

6.  Instruct the toolkit to use our custom shell opcode handler in place of the default handler by calling SetOpcodeHandler.  We specify the type of opcode that we want to replace, and pass in a pointer to the new opcode handler object.
    tk->SetOpcodeHandler (TKE_Shell, new My_TK_Shell);  
This will also cause the toolkit to delete it's default handler object for the TKE_Shell opcode. Note:  As the HOOPS/Stream Reference Manual points out,  all opcode handler objects stored in the BStreamFileToolkit object will be deleted when the BStreamFileTookit object is deleted.  Therefore, we would not delete the My_TK_Shell object created in the above example. 
 
 

4.2 Versioning and Storing Additional User Data

4.2.1 Versioning
As discussed in the previous section, one way for you to check if the file contains custom HSF objects that you know/care about is to always use custom opcode handlers, which then check during the reading process to see if there is user data.  However, there is one deliberate flaw to the example approach and its corresponding sample code.   If the code sees a custom chunk of data following the default TK_Shell object (by noticing a TKE_Start_User_Data opcode), it simply goes ahead and reads the data, assuming that it was data written out by our custom shell handler.  However, what if the file was written out by a custom handler that was not ours?!  In this case, we wouldn't understand the information and don't care about it.  However, the sample code does not properly check if the data is something that we know/care about.  Because it is assuming a specific amount of user data, and this is an unsafe assumption, the code is flawed.
One potential solution is to add another stage during the writing process:  after writing out the TKE_Start_User_Data opcode and the # of bytes of custom data, we could also write out some special value which 'marks' the custom data as 'our' custom data.  Then, during reading, we would check that special value to confirm if it was our data.  However, this solution is a bit cumbersome since it means that our custom logic would always need to be executed, and to properly handle the case, we'd also have to either A) peek at the data up through the special value and then return from the function (so that the default toolkit will skip the custom data)  or B) manually skip through the custom data ourselves by utilizing the '# of bytes' information. 
A better solution would be to store some type of additional versioning information in the beginning of the file which could be checked once, and then we would create and register our custom HSF object handlers only if the file was verified to be a custom version that we created with our custom toolkit.  Recalling that the first opcode in an HSF file is always a TKE_Comment opcode (with contents that are specifically formatted to denote file version information), you could export another TKE_Comment opcode immediately after the first one with contents that contain additional version information.  For example:
 
<TKE_Comment> standard version information; contents:  HSF V6.30
<TKE_Comment> custom version information; contents:  SuperCAD V2.00
<data opcode>
      .
      . 
      .
<data opcode>
<TKE_Termination> 
 
 
The following section details how additional information could be added at the beginning of the file (prior to default HSF objects) as well as at the end of the file.
 
4.2.2 Storing Additional User Data
In addition to providing support for attaching/retreiving user data to/from default HSF objects (by enabling overloading of the Write and Read methods of opcode handlers),  the HOOPS/Stream Toolkit also provides general support for exporting user data via the TK_XML, TK_User_Data and TK_URL opcode handlers, which export the TKE_XML, TKE_Start_User_Data, and TKE_URL opcodes, respectively.   This gives developers the ability to store discrete chunks of user data that may (or may not) be associated with the HSF objects.  The TK_XML opcode handler would be used to store/retreive XML data, and the TK_User_Data opcode handler would be used to store/retrieve custom binary data. The TK_URL opcode handler provides informational links corresponding to data (as opposed to TKE_External_Reference which provides additional content)..
When writing out user data within the Write method of your custom TK_User_Data object, be sure to review the process of Formatting User Data.
To handle import/export of user data, you will need to register a custom opcode handler for the TKE_Start_User_Data opcode.  This is because the toolkit's default handler (TK_User_Data)  simply skips over the user data that is read in.   (Remember that custom opcode handlers such as My_TK_Shell described in the previous section typically only handle user data that is appended to a default HSF object.  If you are adding discrete chunks of user data to the file, then you must Write/Read that data with an entirely new TK_User_Data handler)   The following steps are involved:
 
1.  Define a new class derived from TK_User_Data (which we'll call TK_My_User_Data) that overloads the Write and Read methods to process the extra user data.
#include "object.h"  
class TK_My_User_Data : public TK_User_Data 
   { 
       protected:  
        int       my_stage;   // denotes the current processing stage  
    public:  
        TK_My_User_Data(unsigned    char opcode) : TK_User_Data(opcode) {}  
        // Within Read(), we    may need to verify that the user data is 'our' 
           // user data.  As previously    noted, one approach is to write out 
           // versioning information at the    beginning of the file. 
           // If it is not our custom version    of the file, we would NOT 
           // even register this custom user    data opcode handler; instead 
           // we would allow the default TK_User_Data    handler to take care of 
           // the TKE_Start_User_Data opcode    by simply skipping over any user data  
        virtual TK_Status      Read (BStreamFileToolkit & tk) alter;  
        virtual TK_Status      Write (BStreamFileToolkit & tk) alter; 
   }; 
      
2.  Instruct the toolkit to use our custom user data opcode handler in place of the default handler by calling SetOpcodeHandler.  We specify the type of opcode that we want to replace, and pass in a pointer to the new opcode handler object.
    tk->SetOpcodeHandler (TKE_Start_User_Data,     
                                new TK_My_User_Data(TKE_Start_User_Data)); 
This will also cause the toolkit to delete it's default handler object for the TKE_Start_User_Data opcode. Note:  As the HOOPS/Stream Reference Manual points out,  all opcode handler objects stored in the BStreamFileToolkit object will be deleted when the BStreamFileTookit object is deleted.  Therefore, we would not delete the TK_My_User_Data object created in the above example. 
Custom handling of the TKE_XML opcode would be similar to the above, but you would instead register a custom opcode handler for the XML opcode that is derived from TK_XML.
 

4.3 Tagging HSF Objects to Associate User Data 

If user data needs to be associated with geometry, objects of interest can be 'tagged' by the HOOPS/Stream Toolkit when they are written to the file. 

The following provides a detailed example of how to use tags:
Let's first assume that we have a 'polyline' primitive in our application data structures with an ID of 1000, and that our application-specific data has a data structure associated with the polyline.   When we want to save out our custom data (which could be in either the HSF file or a separate file), let's assume that the data structure will have an ID of 50.
Writing:
1.  When we output the polyline, we must call BStreamFileToolkit::SetKey(1000) before the opcode handler's Write method is called. This tells the toolkit what the 'current' ID is.  We then 'tag'  the polyline by calling the 'Tag' method of the BBaseOpcodeHandler class.  (This can be done explicitly by calling 'Tag' within a custom handler, or can be done by asking the toolkit to automatically Tag all HSF objects, as we'll discuss later.).   Again, this adds an entry to the HSF toolkit's internal tag-table; the entry contains a pair consisting of the ID and a Tag value.
2.  Anytime after the polyline has been tagged, we can call the BStreamFileTookit::KeyToIndex method, which, given the ID of the polyline, returns to us the file Index of the polyline. 
3.  Since we know that our custom data associated with the polyline had an ID of 50, we can now store our own mapping between the custom data and the polyline's file Tag returned in Step 2.  Specifically, this means that we store the following pair of data somewhere:
[50, <the value returned from KeyToIndex>]
The 'somewhere' could of course be in the HSF file.  This would probably be handled by a custom TK_User_Data object.  However, the pairs of mapping data could be external to the HSF file as well; perhaps it is desirable to store them in another user-specific file.  The main point is that after reading the HSF objects back in, we will want to retrieve the mapping data in order to rebuild a runtime mapping between our custom data and the new objects in our graphical database.
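To make the write-side sequence concrete, here is a hedged sketch of Steps 1-3; my_polyline_id, my_data_id, and save_mapping_pair() are hypothetical application-side names, not toolkit API:
    long  my_polyline_id = 1000;             // the polyline's ID in our app 
    int   my_data_id     = 50;               // the custom data's ID in our app 

    tk->SetKey (my_polyline_id);             // Step 1: declare the 'current' ID; 
                                             // the handler's Write then calls Tag(tk) 

    int index; 
    tk->KeyToIndex (my_polyline_id, index);  // Step 2: look up the file index 

    save_mapping_pair (my_data_id, index);   // Step 3: persist the [50, index] pair 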
 
Reading:
1.  During the reading process, we first read in the polyline object and map it to our custom application data structures in the overloaded Execute method of our custom polyline opcode handler. (Let's assume that the example polyline discussed above has a new ID of 500 in our application data structures.)  We must also call BStreamFileToolkit::SetKey(500) in the Execute method so that any following Tag opcode will result in a properly generated tag-table entry.  After the polyline is read in, the toolkit notices that it was tagged and adds a new entry to the internal tag-table consisting of the new polyline ID (which was set on the toolkit via the call to SetKey) and the Tag value. 
Note:  The Execute method should call BStreamFileToolkit::SetKey(ID) for any objects that might be tagged, which include segments and geometry. 
2.  We retrieve the mapping data that was output in Step 3 of the Writing process. 
3.  We call the BStreamFileToolkit::IndexToKey method and pass it the index value that was associated with our custom data value of 50.  This returns to us the new ID of the polyline, and we can now associate the custom data (ID = 50) with the polyline (ID = 500) in our application data structures. 
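Correspondingly, a hedged sketch of the read side; stored_index, my_data_id, load_mapping_pair(), and rebuild_association() are hypothetical application-side names, and the exact IndexToKey signature should be checked against the HOOPS/Stream Reference Manual:
    int  stored_index = load_mapping_pair (my_data_id);  // the index saved during writing 

    long new_key; 
    tk->IndexToKey (stored_index, new_key);     // returns the polyline's new ID (500) 

    rebuild_association (my_data_id, new_key);  // re-link the custom data (ID 50) to 
                                                // the polyline (ID 500) in our app 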
Automatic Tag Generation
Tags can be automatically generated during the writing process by setting the TK_Force_Tags write option:
    int   flags = TK_Force_Tags; 
    BStreamFileToolkit * tk = new BStreamFileToolkit; 
    tk->SetWriteFlags (flags); 
      
All HSF objects which can have tags (such as geometry, segments, and includes) will be tagged during writing, and for each object an entry consisting of the database ID and a Tag value will be added to the HSF toolkit's internal tag-table.  Note that during export of each opcode, we must still call BStreamFileToolkit::SetKey before the Write method is called.
 
 
Manual Tag Generation
To manually instruct the toolkit to Tag specific objects in the HSF file (and add an entry to the HSF toolkit's internal tag-table which is a pair consisting of the database ID and a Tag value), the default opcode handler for the object-type of interest must be overloaded so that we can Tag the object within an overloaded Write function.  For example, if we only wanted to tag HSF polyline objects, we would instruct the toolkit to use our custom polyline opcode handler:
    tk->SetOpcodeHandler (TKE_Polyline, new My_TK_Polyline (TKE_Polyline)); 
     
      
TK_Status My_TK_Polyline::Write (BStreamFileToolkit & tk) 
   { 
      TK_Status status = TK_Normal; 

      // write out the default object 
      if (m_stage != -1) 
         status = TK_Polyline::Write (tk); 

      // once writing of the default object is complete (m_stage == -1), 
      // tag it 
      if (m_stage == -1) 
         status = Tag (tk, -1); 

      return status; 
   } 
As discussed previously, we must call BStreamFileToolkit::SetKey before the Tag function is called.
 
Note: the toolkit automatically tags shell objects (TKE_Shell) during export.
 

5. Maximizing Performance

5.1 Rendering

5.1.1 Scene-graph organization

An HSF file is essentially an archive of a HOOPS/3dGS scene-graph.  Even if HOOPS/3dGS is not used as the graphics system for rendering, the organization of the scene-graph inside the HSF file can affect rendering performance.  Optimal scene-graph structure is covered in the 3D With HOOPS book, and is also discussed in the articles at developer.hoops3d.com.  Critical areas include keeping the number of segments to a minimum, organizing the scene-graph based on attributes rather than geometry, using 'shell' primitives whenever possible to represent tessellated data, and making sure that the shells are as large as possible. 
Keep in mind that a scene-graph is meant to serve as an optimal organization of the graphical information, rather than of higher-level application information such as 'assemblies', 'parts', and so on.  Structuring the scene-graph based on the organization of higher-level application data structures, while perhaps convenient, can severely compromise rendering performance and memory usage in the reading application.  However, the HOOPS/Stream Toolkit's range of HSF opcode objects and customization facilities makes it easy to associate custom (non-scene-graph) data with scene-graph objects and store it in the HSF file, or to store it external to the HSF file (perhaps as XML data).
 

5.1.2 Shell organization

The TK_Shell opcode handler provides support for defining a shell via tristrips.  Drawing shells using tristrips maximizes rendering performance, so shell objects should be exported via tristrips whenever they are available.  This is done by formatting the face list passed into TK_Shell::SetFaces to contain tristrips, and setting the TKSH_TRISTRIPS bit in the shell's suboption variable using TK_Shell::SetSubop.  For example (where shell denotes a hypothetical TK_Shell instance pointer):
 shell->SetSubop (TKSH_TRISTRIPS | shell->GetSubop ()); 
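Putting the two calls together, here is a hedged sketch of exporting a single tristrip, assuming the usual strip encoding (a vertex count followed by that many point indices); the exact face-list layout should be verified against the HSF File Format documentation:
    // one tristrip of 5 vertices, which forms 3 triangles 
    int strip_list[] = { 5,   0, 1, 2, 3, 4 }; 

    shell->SetFaces (sizeof (strip_list) / sizeof (int), strip_list); 
    shell->SetSubop (TKSH_TRISTRIPS | shell->GetSubop ()); 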
     
      

5.1.3 Polygon handedness

Polygon handedness is a basic vector graphics concept.  The specifics are covered in the 3D With HOOPS book, but in general, the existence of a polygon handedness setting for an object enables an application to render that object using backplane culling.  This typically results in a significant increase in rendering performance.
If a TK_Shell object is being exported to an HSF and can (or should) have a handedness defined for its faces, it is critical to make sure that the handedness attribute is exported to the HSF file.  This is achieved by using the TK_Heuristics object to export the TKE_Heuristics opcode (a sketch appears at the end of this section).
This is important because:
1.  The reading application may not be able to determine a proper handedness for the shells. 
2.  Even if the reading application can determine a proper handedness setting, the scene may look incorrect if the setting is made in the application and wasn't made at the time of file export (and hence stored with the shells).  This is because the HOOPS/Stream Toolkit will explicitly export compressed normals during the writing phase, and it is possible that these normals won't be consistent with the handedness setting made in the reading application. 
Note:  if the handedness attribute is going to be exported for a shell or group of shells, it is important to make sure that all the faces in the shell are defined with a consistent point ordering.  Otherwise some faces will be 'backwards', and the object will appear to have holes in it if a viewing application relies on the handedness setting to perform backplane culling.
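As a sketch, exporting the heuristic might look like the following.  The SetMask/SetValue accessors and the TKO_Heuristic_Polygon_Handedness bit are assumptions based on the toolkit's naming conventions, so check opcode_defines.h for the exact names in your version:
    // export TKE_Heuristics before the shells it governs; the bit and 
    // accessor names here are assumptions, as noted above 
    TK_Heuristics  heuristics; 
    heuristics.SetMask (TKO_Heuristic_Polygon_Handedness);   // attribute is set... 
    heuristics.SetValue (TKO_Heuristic_Polygon_Handedness);  // ...and turned on 
    heuristics.Write (*tk);   // in real code, loop while this returns TK_Pending, 
                              // flushing the toolkit's buffer between calls 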