Advanced Configuration

Notes on various techniques for advanced configuration of the INtime platform.


  • 1. Visual Studio tips
  • 2. INtime Memory Configuration
  • 3. Windows boot configurations for INtime
  • 4. Hardware Platform Configuration
  • 5. Network (legacy) driver configuration
  • 6. Advanced Network Configuration
  • 7. Using PCMCIA, PC-Card and ExpressCard interfaces
  • 8. System clock adjustments
  • 9. USA Daylight Savings Timezone Changes from Spring 2007

1. Visual Studio tips

Q: Can I edit and compile my Visual Studio solution even if no INtime SDK is installed?

A: Yes you can. By creating a second solution file you can edit and compile your solution without having to install an INtime development kit. Only the debugger requires the INtime development license to be present.

To open the source of an INtime project without the HWKEY:

  1. Create a copy of the sln (solution) that includes the intp (INtime project).
  2. Open the copy and delete the intp project.
  3. Add the vcproj project that was a subproject of the intp to the solution.

You can now view and compile, but cannot debug, the real-time code in Visual Studio.

Note: this is NOT necessary in Visual Studio 2012 and later where the INTP project has been replaced with the INtime 'Platform' type. Projects may be edited and compiled without the need for a license to be present; only the use of the debugger requires a license in a Visual Studio 2012 and later INtime project.

2. INtime Memory Configuration

There are two aspects to INtime memory configuration: physical memory and virtual memory.

Physical memory refers to the actual RAM available on the PC. The INtime kernel and applications must share the RAM with the Windows OS. When you use the INtime configuration panel to configure INtime kernel memory, you are reserving a portion of the PC RAM for use by INtime. All physical memory used by INtime applications for code, data, stack or dynamically allocated buffers or OS objects comes from the reserved kernel memory pool.

If you configure INtime kernel memory from Windows Non-paged pool, the Windows OS manages all the RAM on the PC, but a (static) piece is allocated from Windows memory for exclusive use by INtime. If you configure Excluded INtime kernel memory, the memory is removed from Windows' control completely. In either case, you must leave enough physical memory available for Windows for it to function correctly. 'Enough' depends on your PC configuration and what Windows drivers, applications and other software must run.

Excluded kernel memory is usually the best configuration for INtime applications that need large amounts of memory. You can usually configure up to ~2GB of Excluded memory for the INtime kernel (with 4GB of RAM installed). The exception is 64-bit Windows configurations with more than 4GB of RAM installed: up to INtime version 6.1, Excluded memory is not the preferred option there, and it is better to use Windows Non-paged pool memory for INtime. With sufficient RAM installed (say, >=8GB total), you can usually safely configure up to ~1.5GB of Non-paged pool memory for the INtime kernel. Since version 6.2, Excluded kernel memory is very well supported via PAE mode (available in the Extended Virtual and Physical Memory configuration).

Configured INtime kernel physical memory is shared among the INtime kernel and applications. When and how you allocate memory buffers in your INtime application can influence success. It is generally better to allocate large memory areas used by your application as early after kernel start up as possible and then leave the memory allocated (rather than allocating, deallocating, then reallocating later). This helps avoid memory fragmentation problems that often develop as applications run.

Virtual memory refers to the addresses used by your INtime application process. When physical memory is allocated, it is assigned virtual addresses (by the INtime kernel) that are used by your application to access the memory. There is a total of 4GB of virtual address space available to the INtime kernel. This 4GB space is shared by the kernel and all applications and approximately 2GB of the INtime virtual address space is available for use among all INtime applications, as long as extended memory mode is not used. With XM mode enabled, every XM process gets its own 4GB virtual memory space.

The virtual address space that is reserved for your INtime application process is configured with the Virtual Segment parameter. The Virtual Segment (Vseg) parameter is configured in the Property Pages of your INtime application VS project. The default size is 16 MB, that is, 16 MB of address space is available for your application to hold all code, data, and stack, as well as memory areas that are dynamically allocated by your application.

Unfortunately, since each PC configuration and set of applications is different, there is some trial and error involved in optimizing your INtime physical and virtual memory configuration for your particular circumstances. The installed default configuration is a minimal configuration that is meant as a starting point.

As a general debugging guideline, if you see an E_MEM error, it means that there is insufficient physical memory available; if you see an E_VMEM error, it means that there is insufficient virtual (Vseg) memory available. Adjust your configuration accordingly.
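A minimal sketch of telling the two cases apart when a large allocation fails (this assumes the AllocateRtMemory and GetLastRtError calls and the E_MEM/E_VMEM status codes from the INtime RT API; the buffer is allocated once at start-up and kept, as recommended above):

    #include <rt.h>        /* INtime RT API (assumed header) */
    #include <stdio.h>

    static void *g_bigBuffer;   /* allocated once, kept for the life of the process */

    int AllocBigBuffer(DWORD nbytes)
    {
        /* Physical memory comes from the reserved kernel memory pool;
           virtual addresses come from this process's Vseg. */
        g_bigBuffer = AllocateRtMemory(nbytes);
        if (g_bigBuffer == NULL) {
            DWORD status = GetLastRtError();
            if (status == E_MEM)
                printf("E_MEM: enlarge the INtime kernel memory pool\n");
            else if (status == E_VMEM)
                printf("E_VMEM: enlarge this project's Virtual Segment (Vseg)\n");
            return 0;
        }
        return 1;
    }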

INtime versions 6.0 and 6.2 introduced new configuration options for the memory manager, providing for several use cases on a host. Understanding the pros and cons is important for selecting the best memory configuration for your needs. There are two different memory configurations: one for each process and one for each node.

Process memory configuration

The memory of each process can be configured in several ways. One is at compile time, by selecting 'Use XM mode' or 'Do not use XM mode' under INtime Properties in the project properties. XM mode affects how the process is mapped into the memory during load time. Non XM mode is faster than XM mode on many hosts because it does nothing extra. However, Atom processors are faster executing in XM mode. Why is everyone excited about using XM mode? As described in the manual, an XM mode process has its own 4 GB memory segment (full 32-bit address range minus some administration overhead results in 3.75 GB). If some processes need a lot of memory, XM mode is much better suited than non XM mode where the single 32-bit virtual address range (limited to 4 GB) must be shared with all of the processes on the node.

The tradeoff for XM mode is in context switches (thread switches between processes, and interrupt handling), which cause some additional cache reloading in the CPU and therefore make context switches slower. Knowing this, the decision whether to use XM mode should be fairly easy.

The second place to set XM mode is in the loader. The default for the loader is to use the compile-time setting, XM or non XM mode. Usually the default is fine, but it can be useful to override the XM mode setting on some hosts. If you are using a prebuilt process which was compiled for non XM mode, the loader can be told to load the process in XM mode. This can improve execution time on Atom and some other processors. In XM mode the segments of each process (code, stack and data) are zero-based (the descriptor has a zero base address), and some CPUs execute their memory-access instructions faster when the descriptor base is zero.

Node memory configuration

The process memory configuration is only available if the node itself has enough physical memory. Nodes can be configured for 'No extended memory', 'Extended Virtual Memory', or 'Extended Virtual and Physical Memory'. If 'No extended memory' is selected the kernel cannot handle XM mode at all. If you have moderate memory demands, this will most probably suffice (except for Atom cores). A process compiled to use XM mode will still load as long as the required memory is available, so the kernel does not need to be configured for extended memory just because a process is compiled to use XM mode. This is the mode most compatible with older INtime versions.

If the demand for memory increases, or if you are using an Atom CPU, the next level is Extended Virtual Memory. This enables the kernel to handle XM mode processes. All memory continues to be taken from the Windows non-paged pool, and the total memory available to a node depends on the OS but is far less than 4 GB.

To gain access to more memory, or to be independent from the Windows non-paged pool, support for PAE (Physical Address Extension) mode was introduced in INtime 6.2 with the option for Extended Virtual and Physical Memory. PAE is a HW feature present in most CPUs. It translates a 32-bit virtual address to a physical address that can lie above the 4 GB boundary (handled in software as a 64-bit value), so a process with 32-bit addressing can reside in physical memory far above the 4 GB limit. PAE enables bigger memory spaces for processes and almost unlimited memory per node, for demanding applications. The drawback of this feature is that some applications might need to be reviewed: if the physical memory address is used (e.g. for DMA access) the pointer is now 64-bit and different library functions have to be called. It is always a good idea to give the HW drivers some memory below 4 GB and only force the memory-demanding processes to allocate memory above 4 GB.

There are advantages to using the new memory manager in 6.2 even if the extended address modes are not used. It makes more efficient use of both virtual and physical memory compared to older versions of INtime so that the effective limit of usable physical memory is raised from just below 2 GB per node to nearly 4 GB per node.

The ability to break the allocation of memory to INtime into a number of separate areas instead of a single allocation for all memory can make it more reliable to allocate large amounts of memory for INtime from the Windows non-paged pool, even without enabling the extended address features.

Conclusion

With this knowledge in mind, the most efficient way to address memory should be using non XM mode. XM mode and Extended Memory are extras that can be used if needed. But as already explained, there are some exceptions: the extra modes may be preferred for Atom cores or for high memory demands. Another use case is a CPU with many cores assigned to INtime; for each node to have enough memory, PAE mode may be needed with memory excluded from Windows (where memory is reserved from the top of available physical memory). So for critical applications you need to do your own measurements and find the best configuration on your specific HW. The good news is that INtime provides some support for this: have a look at INscope to see your idle times, or instrument your code with GetRtTimestampInfo() and calculate the time for your most critical execution path. Additionally, thread accounting can be enabled and execution times can be read with INtime Explorer or programmatically with GetRtThreadAccounting().
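For example, a minimal sketch of instrumenting a critical path, using the Visual C++ __rdtsc() intrinsic as a simple stand-in for the timestamp functions named above (converting counts to time requires the timestamp frequency of your CPU; DoCriticalWork is a hypothetical placeholder):

    #include <intrin.h>   /* __rdtsc() intrinsic (Visual C++) */
    #include <stdio.h>

    static void DoCriticalWork(void)   /* hypothetical critical execution path */
    {
        /* ... the code being measured ... */
    }

    void MeasureCriticalPath(void)
    {
        unsigned __int64 t0 = __rdtsc();
        DoCriticalWork();
        unsigned __int64 t1 = __rdtsc();
        /* (t1 - t0) is the elapsed time in timestamp counts; divide by the
           timestamp frequency to convert to seconds. */
        printf("critical path: %llu counts\n", (unsigned long long)(t1 - t0));
    }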

3. Windows boot configurations for INtime

In some circumstances it is desirable to retain the ability to boot the Windows platform with all cores returned to Windows, on a system where INtime has been given one or more dedicated cores.

In order to achieve this, create a second boot configuration and effectively disable INtime by returning any reserved cores to Windows. This assumes that the system is running Windows Vista or later and has a Boot Configuration Database (BCD).

Run the bcdedit command to copy the current configuration (an elevated command prompt is required), then delete the NUMPROC value from the new configuration. If the 'Reserve memory from Windows' option was used, also delete the TRUNCATEMEMORY value to free the reserved memory.
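For illustration, a minimal sketch of the command sequence ({new-entry-guid} is a placeholder; bcdedit prints the actual identifier of the new entry when the copy is made):

bcdedit /copy {current} /d "Windows - no INtime"
bcdedit /deletevalue {new-entry-guid} numproc
bcdedit /deletevalue {new-entry-guid} truncatememory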

With multiple Boot Loader options, a 'Choose an operating system' screen prompts for a choice each time Windows is started. After login, INtime will detect the BCD change and prompt for a reboot; ignore this prompt while in 'no INtime' mode. The following transcript is from Windows 10 (notes are marked with <<<NOTE:):

Although notebooks (or any computer subject to sleep) are not good candidates for hosting a real-time system, many developers have INtime installed on a notebook. When the notebook enters the sleep state while an INtime kernel is running, it can cause problems.

This is a registry modification that handles the shutdown of INtime nodes when going into the sleep state.

In REGEDIT:

Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Rtif\Parameters
Flags - Set bit 5 (for example, if Flags is 0x00000005 (5), change it to 0x00000025 (37)).
You must reboot the PC for this setting to take effect.


This changes the sequence that rtif uses to shut down the INtime nodes.
It may not work for all notebooks, but has worked for many.
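As a sketch, the same change can be made from an elevated command prompt with reg.exe (this example assumes the existing Flags value is 0x5, so setting bit 5 gives 0x25; substitute your own current value with bit 5 set):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Rtif\Parameters" /v Flags /t REG_DWORD /d 0x25 /f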

4. Hardware Platform Configuration

'Jitter' is the variation in the time of a repeated event; the difference between when the event was supposed to occur, and when it actually occurred. Within a real-time software system jitter can be minimized by careful design and knowledge of how the system works and behaves. Only a real-time operating system (compared with a general-purpose operating system) allows you to create such a predictable system. However, jitter may also arise due to the interaction between the software and the hardware platform.

INtime provides a tool for measuring the jitter in the system clock interrupt (the 'INtime Graphical Jitter Display'). The system clock interrupt is the highest-priority event in the INtime system, so any jitter in the clock event gives a measure of the quality of the platform before you add your software system to the platform. If clock jitter is poor then the rest of your system will behave poorly in this respect also.

The best way to measure jitter is by using an external clock source, such as an independent counter/timer card, with a non-variable frequency source. Unfortunately, TenAsys is unable to provide such a solution for 'jitter' measurements because it requires hardware that is specific to your system.

Even before any jitter measurements are made the Platform Evaluation Tool (shipped with INtime SDK) can give a first overview of some possible sources of trouble. It checks for some of the settings described further down in this article.

Jitter is typically caused by one or more of the following:

  • Clock states (C-states): These are used to disable the clocking of parts of the CPU. The use of C-states introduces non-deterministic delays in returning to full-speed operation, depending on which state was active before the event that starts your execution code occurred. C-states should be turned off in the BIOS.

  • System Management Mode (SMM): While the CPU is in SMM (entered via an SMI, the non-maskable System Management Interrupt), it will not respond to external interrupts, contributing to sometimes significant interrupt latency. See http://en.wikipedia.org/wiki/System_Management_Mode for more info.

SMM is used for many features, including legacy power management and PS/2 keyboard emulation (for operating systems that need a standard PS/2 keyboard). Disable PS/2 emulation, APM (Advanced Power Management, a legacy BIOS-based power management scheme), and all other SMM features in the BIOS. SMM can also be used to support some special features, such as sensing and changing the video output from the LCD panel to an external CRT on a laptop, detecting lid closures and base station insertion and removal on a laptop, and changing the speed of cooling fans (APM). SMIs have also been observed in some cases when running BCDEDIT, which Windows itself uses. To be sure there is no interference between Windows updates and INtime, run BCDEDIT from an elevated command prompt while the Graphical Jitter Tool is executing. If there are many spikes during BCDEDIT, the BIOS is using SMI to access the primary boot partition; contact your HW vendor to get a BIOS which does not use SMI for this purpose.

  • Hyper-Threading Technology: Although overall system performance might be increased by enabling Hyper-Threading, it can contribute to system latencies due to the shared pipeline architecture of this technology. With multi-core technology the benefit of disabling Hyper-Threading and dedicating one core to INtime is tremendous. Typical jitter measurements in systems with this configuration are extremely low.
  • HW-Controlled Performance (HWP): This feature automatically controls several areas of the CPU and switches some regions of the hardware on and off; switching hardware back on always incurs some delay. It is good advice to disable this feature in the BIOS, if supported. The bad news about HWP is that once it is turned on it cannot be turned off again, and in recent releases of Windows 10 the Processor driver enables it if it is not disabled by the BIOS configuration. In that case the only recourse may be to disable the processor driver altogether (intelppm.sys).
  • Windows Power Management: The various power management strategies used by Windows can impact the CPU response times to interrupts (measured as increased interrupt latency), especially switching the CPU into and out of different power states as a function of load. This can be an issue for dedicated multi-core systems as well as 'shared' configurations, depending on the CPU power design.

There can be a long delay when exiting the halt state on some systems. For best performance disable all power management features. Note that disabling power management in the BIOS does not necessarily disable it in Windows. Windows uses its own power management. Windows events that can cause a power management state change include entering the screen saver.

See http://en.wikipedia.org/wiki/Power_management for some additional pointers to information about the various power management schemes that can be found on x86 PC systems.

  • I/O devices: We have seen some video and sound device drivers contribute to system latency. These include cheaper video controllers such as those from SiS and older controllers from S3, but such behavior is not limited to these vendors. Some video drivers behave badly on purpose to increase video performance at the expense of the rest of the system. This is also true of some sound controllers.

Install a generic video driver or change the video hardware acceleration features. Sometimes turning off video acceleration is the right thing to do, and sometimes enabling full video acceleration is the solution. Likewise, try installing a generic sound driver or removing or disabling the audio driver. If removing the audio driver prevents certain applications from running, you can install a virtual audio device driver to satisfy the application's need for an audio device, for example 'Virtual Audio Cable' from http://software.muzychenko.net/eng/vac.html.

Some newer Intel CPUs have graphics support included. INtime takes care to configure these chips properly for you, if it is supported by the HW. Be sure to use an INtime version which already has support for the CPU used (e.g. do not use INtime version 5 on Skylake processors).

  • Core cross talk: On some lower-end CPUs, crosstalk from core 2 to core 3 has been seen. If the jitter is in a critical range, it can help to give Windows access to core 0 and core 1, assign core 2 to INtime but leave it unused, and assign core 3 to INtime for your application. Sometimes the jitter can be improved by several tens of microseconds.
  • SpeedStep Technology: SpeedStep does not impact interrupt and timer latency so much as it adversely affects the way jitter measurements are made with the INtime 'jitter' program. In order to make the INtime 'jitter' program usable on all processors we use the Pentium 'Time Stamp Counter' in the CPU to measure the time at which the timer interrupt is handled (see the RDTSC instruction: http://en.wikipedia.org/wiki/RDTSC).

To minimize the impact of SpeedStep Technology, disable the SpeedStep features in the BIOS and in Windows (set your system to run at maximum frequency and turn off thermal management, sometimes called APM, in the BIOS). When SpeedStep is enabled the CPU clock frequency can be variable and jitter measurements can be unreliable.

There are some Windows utilities that can be used to either modify or monitor the SpeedStep behavior of Windows, such as the RightMark CPU Monitor (http://cpu.rightmark.org/). These utilities are generally only of value when INtime is set to operate in a 'shared' mode (always the case on a uni-processor system and the default state on a multi-core processor for 32-bit Windows).

Use these utilities with caution! They can be used to modify the thermal management as well as load-based algorithms on your machine. Allowing a system to overheat can result in permanent damage to your system!

Using the above utilities on a multi-core processor, when INtime is configured for dedicated operation, may not be of value because SpeedStep is generally core-specific. Thus, forcing the Windows core to operate at maximum speed does nothing for INtime when operating on dedicated processor cores. Note also that these tools cannot 'see' or manipulate the dedicated INtime core in a dedicated multi-core system. INtime itself takes care of those settings on the cores it uses in dedicated mode.

The following instructions provide steps to collect information to identify potential sources of platform jitter.

To run the platform diagnostic tools, start the INtime kernel, unzip the attachment to a Windows folder, open a Windows command prompt, CD to the folder containing the tools and type the following commands:

ldrta cpuid.rta

ldrta msr.rta -a -all

ldrta smi.rta -a -d

Save all the output text of each tool and return it to INtime support.

5. Network (legacy) driver configuration

This information refers to drivers in the legacy network stack, which was shipped with INtime versions prior to 4.0, and is still shipped with current versions as 'legacy networking'.

The standard options for the Gigabit Ethernet drivers (e1000.rta, e1ge.rta, bcomg.rta, r8168.rta) are as follows:

Option                    Meaning
debug=x                   Set the debug level and mask to 'x'.
list                      Lists all of the interfaces which the driver recognizes, along with the inst number for the interface.
inst=n                    Sets the instance number for the interface. The first interface recognized by the driver is instance 0, the second 1, etc.
speed=auto|10|100|1000    Sets the line speed of the link in Mbits/s. Default is 'auto'.
duplex=full|half          Sets the duplex on the link. Default is 'auto'.
poll                      Set polling mode for the driver (not available in all drivers). In this mode the driver polls the status register instead of relying on the interrupt from the device.
pollpri=n                 Sets the priority of the polling thread. Default is priority 128.
ifname=xyz                Override the default interface name. The default name is ethN where N is allocated as the drivers are started.

Special options for the e1000 driver are as follows:

txdescs=n                 Set the number of hardware transmit descriptors to n (default 256)
rxdescs=n                 Set the number of hardware receive descriptors to n (default 80)
fc=disabled|rxonly|txonly|enabled|default   Set the flow control (default is hardware-dependent)
txdelay=n                 Set transmit interrupt delay to n microseconds (default is 64)
txabsdelay=n              Set transmit absolute delay to n microseconds (default is 64)
rxdelay=n                 Set receive interrupt delay to n microseconds (default is 128)
rxabsdelay=n              Set receive absolute delay to n microseconds (default is 128)
itr=n                     Set interrupt throttling delay to n (default is 8000)

The 100 Mbit drivers (eepro100 and rtl8139) are slightly different:

options=0x10              Force full-duplex mode
options=0x20              Force 100 Mbit speed
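As an illustrative sketch only (assuming driver options are passed on the loader command line with the same -a argument convention used by the diagnostic tools earlier in this article), loading the second Intel Gigabit interface at a fixed speed in polled mode might look like:

ldrta e1000.rta -a "inst=1 speed=100 duplex=full poll"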

6. Advanced Network Configuration

To use the TenAsys Virtual Ethernet device in a network bridge on a system that is part of a Windows Domain, an administrator must change the group policy setting to allow this. Please note that this article describes only one way to allow these settings and that the change should be made by an experienced IT administrator. TenAsys Corporation cannot be held liable for any compromised security caused by allowing network bridging on a domain. Your IT professional may have an alternative that better suits your company.

These steps have been validated on Windows Server 2003 for Small Business Server version 5.2.3790 SP 2 Build 3790, but should work for any domain controller.

  1. Open the Group Policy Management console in MMC.
  2. Select Start>Run.
  3. Enter 'gpmc.msc' and click OK.
  4. Browse to 'Forest>Domains>[Domain name]>[Small Business]* Client Computer'.

    * This name may be different when using the standard or enterprise version of Server 2003.

  5. Click the 'Setting' tab in the right window.
  6. Expand 'Network/Network Connections' by clicking the 'show' link.
  7. Right click 'Prohibit installation and configuration of Network Bridge on your DNS domain network'.
  8. Select 'edit'. The 'Group Policy Object Editor' launches.
  9. Under 'Computer configuration' in the right pane, browse to 'Administrative Templates>Network>Network connections'.
  10. Double click the 'Prohibit installation and configuration of Network Bridge on your DNS domain network'
  11. Select the 'disable' radio button and press OK.

7. Using PCMCIA, PC-Card and ExpressCard interfaces

There are some restrictions on using add-in cards in laptop systems, due to the way these interfaces are implemented in the platform architecture.

There is a difference between PCMCIA and PC-Cards, in that the PCMCIA implementation is an extension of the ISA bus. INtime does not handle such devices well, because the bridge has to remain under the control of Windows. The INtime Device Manager does not allow the transfer of ISA devices to INtime.

PC-Cards have the same form factor but are implemented electrically as PCI devices, which means that we can assign them to INtime. However, bridge implementations mean that in almost all cases it is not possible to transfer control of the device interrupt to INtime, because the interrupts are invariably shared with the bridge device, which has to remain under the control of Windows. We do not allow the sharing of an interrupt line (IRQ) between an INtime device and a Windows device at the same time. In such cases the device has to be polled. This is the mode in which such Ethernet cards have been used by ETAS in the past.

If Windows cannot find a device driver for the card, this may prevent us from passing it to INtime, because of a problem in the WDM interface under XP, and sometimes under Windows 7 also. To work around this issue, manually assign our rtdrm.sys device driver to the device before passing it to INtime. On Windows 7 this is done as follows (there is a similar sequence for XP, but with a different path for the device driver):

  • In Windows Device Manager, right-click the device and select Update Driver Software...
  • Select 'Browse my computer for driver software'
  • Browse to the following path on Windows 7: C:\ProgramData\TenAsys\INtime\drivers. On XP, use C:\Documents and Settings\All Users\Application Data\TenAsys\INtime\drivers.
  • Click Next and continue to the end of the driver installation sequence and reboot if prompted
  • Now you should be able to pass the device to INtime using the INtime Device Manager.

So in short, when using a PC-Card network interface, pass it to INtime 'non interrupt, or MSI' and load the corresponding INtime driver in polled mode.

For ExpressCard, the situation is simpler: the device behaves as a normal PCI Express device and should also support MSI, which means you can use it in interrupt mode.

8. System clock adjustments

On an ACPI system, the INtime kernel clock uses the APIC timer (driven by the FSB clock) to generate its low-level timer tick (default value of 500 microseconds). (On older systems, mostly pre-1998, this tick is generated by taking over the legacy PIT from Windows.) Different motherboard designs have different levels of accuracy in the circuits that drive the APIC timers, so a means to fine-tune the kernel clock tick period has been provided.

To adjust the INtime kernel tick period, add a DWORD value named RTClockAdjust to the registry. For INtime 3.x, the value is found here:

HKLM\SOFTWARE\TenAsys\INtime\RtKernelLoader\RtKernel

For INtime 4.0 and later, for a given node, this value is found here if using a 32-bit version of Windows:

HKEY_LOCAL_MACHINE\SOFTWARE\TenAsys\INtime\Distributed System Manager\Configured Locations For Realtime Nodes\Individuals\[NODENAME]\RTKernel

Or here if using a 64-bit version of Windows:

HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\TenAsys\INtime\Distributed System Manager\Configured Locations For Realtime Nodes\Individuals\[NODENAME]\RTKernel

(replace the string [NODENAME] with the name of the node where you want this adjustment to be made).

This value is added to the number used to program the APIC timer that drives the INtime kernel tick. While it is stored as an unsigned 32-bit quantity, it is interpreted as a signed 32-bit quantity, so values less than or equal to 0x7fffffff will increase the timer count (lengthen the timer period and slow down the timer interrupt) and larger values (e.g., 0xfffffff0) will effectively be subtracted and decrease the timer count (shorten the timer period and speed up the timer interrupt).

The APIC timer used to drive the INtime kernel tick is driven by the FSB (front side bus) clock. Typical frequencies are 100 MHz, 133 MHz, 200 MHz, and 333 MHz. If you inspect the Windows System Event Log on your INtime PC, you will see that the rtif driver posts a message at boot time reporting the value measured for the APIC timer frequency. For a 500 microsecond kernel tick rate, INtime divides the timer frequency by 100 to get a 10 millisecond rate, adds the RTClockAdjust value, and then divides the result by 20 to get a 500 microsecond rate. This is the value that is programmed into the APIC timer. (Other low-level kernel tick values are derived by changing the divisor in the last division appropriately.)
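For example, for a measured APIC timer frequency of 133333333 Hz (a 133 MHz FSB) and RTClockAdjust = 0, the count programmed for a 500 microsecond tick is roughly:

133333333 / 100 = 1333333 (counts per 10 ms)
(1333333 + 0) / 20 = 66666 (counts per 500 microsecond tick)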

An equation you can use to calculate a value for RTClockAdjust is:

RTClockAdjust = (BUS_CLOCK / 100) * ADJUST_RATIO

Where:

BUS_CLOCK is the nominal frequency reported by rtif, in the event log, and ADJUST_RATIO is the ratio by which the kernel tick is either too fast or too slow. ADJUST_RATIO is a negative number for those instances where the INtime time of day runs too slow (the INtime TOD is falling behind).

For example, if your INtime clock runs five minutes slow in one day, then it is

5 / (60 * 24) = 0.34722%

too slow.

If your FSB clock frequency was 333333333 Hz, you would make the following adjustment:

RTClockAdjust = (BUS_CLOCK / 100) * (- 0.0034722)

RTClockAdjust = 3333333 * (- 0.0034722)

RTClockAdjust = -11574

Or, in hex, RTClockAdjust = 0xffffd2ca
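(Check: 11574 decimal is 0x2D36, and -11574 as an unsigned 32-bit value is 0x100000000 - 0x2D36 = 0xFFFFD2CA.)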

You can edit the registry to allow the INtime kernel to use an arbitrary clock tick period. Note that only the tick period values available through the INtime configuration tool have been tested; other values may produce undefined results.

To configure a non-standard INtime kernel tick period, run regedit and navigate to the key:

HKLM\SOFTWARE\TenAsys\INtime\Distributed System Manager\Configured Locations For Realtime Nodes\Individuals\NodeA\RTKernel

On 64-bit Windows:

HKLM\SOFTWARE\Wow6432Node\TenAsys\INtime\Distributed System Manager\Configured Locations For Realtime Nodes\Individuals\NodeA\RTKernel

(If your INtime node is named something other than NodeA, substitute the appropriate name when navigating to this key.) Edit the MicrosecondsPerRtKernelTick DWORD value. You can set it to the desired tick period in microseconds. Make sure that bit 31 of the value is set to 1. For example, to configure an INtime kernel tick period of 10 microseconds, enter the value 0x8000000a. If you do not set bit 31, the value you enter will be rounded to the nearest officially supported kernel tick period.
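For example, the same change can be scripted with reg.exe on 64-bit Windows for a node named NodeA (path as above; adjust for 32-bit Windows or a different node name):

reg add "HKLM\SOFTWARE\Wow6432Node\TenAsys\INtime\Distributed System Manager\Configured Locations For Realtime Nodes\Individuals\NodeA\RTKernel" /v MicrosecondsPerRtKernelTick /t REG_DWORD /d 0x8000000a /f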

Depending on the platform and configuration, setting a very short clock period can cause performance problems when the execution time it takes to service the clock (and all activities that are scheduled by the clock) is a significant fraction of the clock period.

9. USA Daylight Savings Timezone Changes from Spring 2007

Starting in the spring of 2007, daylight saving time (DST) start and end dates for the United States will transition to comply with the Energy Policy Act of 2005. DST dates in the United States will start three weeks earlier (2:00 A.M. on the second Sunday in March) and end one week later (2:00 A.M. on the first Sunday in November).

This affects TenAsys products in these ways:

  • INtime 3.0 and later (and iRMX for Windows based on that product) requires only the Windows update (see Microsoft article 'Preparing for daylight saving time changes in 2007'). By default, the INtime kernel timezone is configured from the Windows timezone at the time the INtime kernel loads.
  • Older INtime products configure the timezone by modifying the TZ parameter configured during installation. The TZ parameter for the USA timezones with daylight savings time must be modified as follows:
    EST5EDT becomes EST5EDT,M3.2.0/2:00:00,M11.1.0/2:00:00
    CST6CDT becomes CST6CDT,M3.2.0/2:00:00,M11.1.0/2:00:00
    MST7MDT becomes MST7MDT,M3.2.0/2:00:00,M11.1.0/2:00:00
    PST8PDT becomes PST8PDT,M3.2.0/2:00:00,M11.1.0/2:00:00
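For reference, these rules use the POSIX TZ format Mm.w.d/time: M3.2.0/2:00:00 means the second Sunday (day 0 of week 2) of March (month 3) at 02:00, and M11.1.0/2:00:00 means the first Sunday of November at 02:00.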
