
What is the custom software package for the Degraded Communications Emulator for the DRC 2015 Finals?

Read about the custom software package that InterWorking Labs is creating for the DARPA Robotics Challenge Finals in June 2015.

Read more ...

Setting Up a DNSSEC DNS Master With BIND, SELinux, Chroot Jail, and Dual Horizon (Dual View) Access

DNSSEC (Domain Name System Security Extensions) is not trivial.  In this article we describe our experiences setting up DNSSEC using BIND 9.x on a Linux system with SELinux, with the whole thing in a chroot jail.

Read more ...

Should We Trust Iperf Bandwidth Measurements?

We've noticed how easy it is to misunderstand the bandwidth numbers generated by the popular tool Iperf.

So we've written a short white paper on the subject:

Does IPERF Tell White Lies?

Chris reviews: Cisco UC320W

It's time for a new phone system at the office: what should one do?  InterWorking Labs faced the dilemma of what to do about our internal phone system.  We ended up selecting a Cisco UC320W system.  This article explains our requirements, why we went with an "Asterisk-in-a-box" solution, and the features of the Cisco product.

Read more ...

Possible flaw in jQuery UI Spinner Widget (Version 1.10.3)

There appears to be a flaw in the jQuery UI Spinner widget (version 1.10.3, July 2013).

The problem is that the value actually displayed is different from the value that has been set.  The displayed value is affected by the "min" and "step" values.

This note explains the issue in more detail and points out an undocumented requirement for use of the widget.

Read more ...

Maxwell G -- Corrected Script to Start a VNC Server

Many users who place their Maxwell G units in a lab find it inconvenient to operate the Maxwell G in the lab itself.  So they export its desktop to a Windows or Linux desktop computer in a more comfortable location.

The best tool for this is VNC.  Virtual Network Computing (VNC) uses a client/server framework.  The VNC viewer/client executes on the local computer (usually your desktop, laptop, or mobile device).  The VNC viewer connects to the VNC server that runs on Maxwell G.  The VNC server transmits a duplicate of the Maxwell G's display to the VNC viewer on your desktop/laptop/mobile device.  When you type commands, they are transmitted to and executed on Maxwell G.

VNC requires that the user log in to a command prompt (usually via SSH or via a locally attached keyboard/monitor) in order to start a VNC server on Maxwell G. The server, when started, announces the address and port/desktop number information that needs to be fed to the VNC client.

That server can then be accessed from a VNC client using that number information.
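As a quick aid, the desktop number the server announces maps directly to a TCP port: display :N listens on port 5900 + N, the standard VNC convention. A minimal sketch of that mapping:

```python
# Standard VNC convention: display :N is served on TCP port 5900 + N,
# so a server that announces "desktop :1" is reached on port 5901.
VNC_BASE_PORT = 5900

def vnc_port(display_number: int) -> int:
    """Map an announced VNC desktop/display number to its TCP port."""
    return VNC_BASE_PORT + display_number
```

For example, if the server reports desktop :2, point your viewer at port 5902 on the Maxwell G's address.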

Getting VNC started can be somewhat difficult, particularly if there have been previous VNC sessions.

InterWorking Labs provides a shell script that will clean out any stale VNC server information and start a new, clean VNC service.

That script is in a file named /home/maxwell/bin/startvncserver

If you have any difficulties with VNC execution on Maxwell G, please send us a note and we will help diagnose the problem and provide more detailed instructions.

Maxwell G & Maxwell Pro -- Trying out the new Maxwell G and Maxwell Pro -- looks like I got the Russian version?

During the power-up sequence on some versions of Maxwell Pro the display of text suddenly (but temporarily) turns into strange, Cyrillic looking characters.  Several users have wondered about this.

Read more ...

Mini Maxwell -- UDP Filters on Two Different Bands

One of our users had some difficulty setting up the filters in Mini Maxwell.  The user wanted to set up two filters into two bands, each band having different impairments, and wondered what happens to packets that match both of those filters.

Read more ...

Mini Maxwell -- How do I Control Mini Maxwell from my own script?

A user asks whether he or she can send commands to Mini Maxwell without going through the web-page graphical user interface.  The answer is "yes": Mini Maxwell can be controlled from a Python script running on another computer.

Read more ...
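The linked article has the details; purely as an illustration of the idea, such a script typically issues the same HTTP requests the unit's web GUI would submit. Everything below (the endpoint path and the field names) is a hypothetical placeholder, not the actual Mini Maxwell API:

```python
# Hypothetical sketch only: drive an impairment box by submitting the same
# form fields its web GUI would. The URL path ("/control.cgi") and the field
# names ("band", "delay") are placeholders, NOT the documented Mini Maxwell API.
from urllib.parse import urlencode

def build_set_delay_request(host: str, band: int, delay_ms: int):
    """Build (url, form-encoded body) for a hypothetical 'set delay' command."""
    url = f"http://{host}/control.cgi"                   # placeholder endpoint
    body = urlencode({"band": band, "delay": delay_ms})  # placeholder fields
    return url, body.encode("ascii")

# A real script would then POST this, e.g. with urllib.request.urlopen(url, body).
```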

Mini Maxwell -- Is it possible to get Mini Maxwell in a rack mount?

Is Mini Maxwell available in a 1RU rack mount form?  Yes.

Read more ...

Maxwell Pro -- What monitors work with Maxwell Pro?

Answer:  (by ChrisW)  InterWorking Labs has tested Maxwell Pro with these monitors:

Samsung T240HD

We believe any monitor manufactured within the last five years should work fine. Older monitors will not work.

Maxwell Pro -- Filtering traffic by ports?

Question:  (by DKorzuchin)  Hi, I have a question about the Maxwell Pro: I want to filter traffic based not only on the IP addresses of two machines but on particular ports as well. Is this possible?

Answer:  (by JimLogajan)  You can create a filter matching one, two, or a range of port numbers (where the range is set using a bit mask) by setting parameters in the "IPv4 and LAN Filter" or "IPv6 and LAN Filter" dialogs available under the "Flow Selector" dialog.

Alternately, you can use the "Match Expression" dialog to filter on up to four source or destination ports by setting the appropriate offset into the TCP or UDP headers and setting the desired match values.

If that isn't enough, you can create up to 64 flows (click on the root "System" node in the selection tree in the left panel of the main GUI and set the "Number of Flows" field), with each flow set up with a subset of the desired ports using one of the previous mechanisms. This would allow up to 256 ports to be selectively chosen for the desired impairments. This approach is somewhat tedious to set up from the GUI but generally much easier using a script that takes advantage of the Remote API.
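To make the bit-mask range idea concrete: a port matches when its masked bits equal the masked bits of the configured value, so a mask that leaves the low 4 bits free selects a 16-port range. The sketch below is our own illustration of the general technique, not Maxwell code:

```python
def port_matches(port: int, value: int, mask: int) -> bool:
    """Generic bit-mask match: compare only the bits selected by the mask."""
    return (port & mask) == (value & mask)

# Mask 0xFFF0 fixes the upper 12 bits and leaves the low 4 bits free,
# so value 8000 (0x1F40) selects the 16-port range 8000..8015.
matched = [p for p in range(7990, 8030) if port_matches(p, 8000, 0xFFF0)]
```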

Maxwell Pro -- Packet drop after a sequence of packets?

Question:  (by Choco)  I want to customize my emulated burst traffic by regularly dropping packets for every burst. How do I tell Maxwell, for example, that I want it to drop X packets every 10 packets?

Answer: (by Jim Logajan) You can perform periodic impairments like that using Maxwell's "Flow Triggering" mechanism. It allows you to define a flow that impairs every nth packet that matches the flow. For example, to drop X packets after every 10 packets, you would first select the "Flow Selector" branch for a flow (such as Flow 0). Then place a check mark in one of the options that are labeled "LAN Filter", such as "IPv4 and LAN Filter".

The "Flow Triggering" button should now be active - clicking it will display the flow triggering dialog.

Select either "Basic" or "TcpUdp" trigger type. The GUI contains an explanation of their meanings; if in doubt select "Basic".

Since you want to drop every 10 packets you would enter 10 into the "Number of Packets in Initial Lull" field. Then enter your X value into the "Number of Packets to Impair" field. If you want the drops to go on forever then leave the "Number of Trigger Repetitions" field at zero.

Now click the "Apply" button to start the filtering. But wait - we haven't actually told Maxwell that we want packets in Flow 0 to be dropped! Click on flow zero's "Drop" branch. Check mark the "Activate Packet Drop" field and enter 100% into the "Drop Probability" field. Maxwell will now begin dropping X packets after every 10 packets.

If you need to narrow the type of packets that are included in the drops you would need to supply additional filter criteria in the "Flow Selector" dialog for that flow.
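The resulting drop schedule is periodic: a lull of 10 passed packets, then X dropped packets, repeating. A small simulation of that schedule (our own sketch, assuming the 100% drop probability configured above; not Maxwell code):

```python
def trigger_pattern(lull: int, impair: int, total: int) -> list:
    """Per-packet drop decisions for a repeating lull/impair trigger:
    False (pass) for `lull` packets, then True (drop) for `impair` packets."""
    period = lull + impair
    return [(i % period) >= lull for i in range(total)]

# lull=10, X=2: out of every 12 matching packets, the last 2 are dropped.
decisions = trigger_pattern(10, 2, 24)
```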

Maxwell Pro -- Testing the TCP Push Bit

Question:  (by Admin for Choco)  I would like to test that the sending side sends a PUSH bit in the request packet. Is there a way I can do that without purchasing your TCP test suite (we don't have the budget right now)?

Note that RFC 1122 states:

A TCP MAY implement PUSH flags on SEND calls.  If PUSH flags
are not implemented, then the sending TCP:
(1) must not buffer data indefinitely, and
(2) MUST set the PSH bit in the last buffered segment (i.e., when there is no more queued data to be sent).

Just trying to run some experiments on that.

Answer:  (by Admin)  Hi Choco, One thing you could do is play around with the ALTER impairment. You can modify a specific packet right from the UI. I've been using it for various packet manipulations.

Also, I've got the Maxwell TCP Test package and there is a test in there to do what you want. But you can do it with the ALTER impairment.

SilverCreek -- Trap Interval for Net-SNMP (a little off the SilverCreek topic)

Question:  (by Tiny Timmy)  I've been working with the link-up/link-down traps in Net-SNMP, and notice that they only update every 60 seconds.

Supposedly you can override this in the configuration file, but when I tried to make it every two seconds, it just went back to 60 seconds. In other words, the settings are not honored.

Has anyone figured out a way around this?

Answer One:  (by PumpkinJason)  Are you saying the traps are delayed while being sent out by your snmpd daemon, or do you mean the traps are just logged with a delay by snmptrapd?

What configuration directive did you use to set the interval to "2 seconds"?

Answer Two:  (by TinyTimmy)  The problem with net-snmp is that it checks the link status only once every 60 seconds.

That means that if you yank the network cable out you will wait anywhere between 0 and 59.999 seconds before the net-snmp agent emits a link down trap.

Or, if the link is already down and you insert a network cable, you wait from 0 to 59.999 seconds before the agent emits a link-up trap.  And worse, if you manage to yank the cable and then put it back again so that both of those events occur between checks by the net-snmp agent, *no* link-down or link-up trap will be emitted at all - only silence.

There are directives in net-snmp to alter that polling period.  But they don't work; they do not change the one minute cycle.

This issue has absolutely nothing to do with a trap receiver - I watched the emission (or non-emission) of traps using Wireshark.

Answer Three:  (by PumpkinJason) I suspect this has something to do with DNS resolution time? 

If you are using a host domain name for trapsink in snmpd.conf, change it to an IP address to see if it makes any difference.

Answer Four:  (by TinyTimmy)  I have been using the IP address, and DNS resolves in a couple of milliseconds, so that is not it.

Answer Five:  (by Sandiyago)  For the default linkUpDownNotifications trap, the current Net-SNMP implementation polls for link state changes every 60 seconds, which likely explains what you observed. An event-based mechanism is supposed to appear in a future release.

Answer Six:  (by Sandiyago)  More reading suggests that by using an agent compiled with Event MIB support we may be able to control the polling frequency:

DisMan Event MIB:

linkUpDownNotifications yes

will configure the Event MIB tables to monitor the ifTable for network interfaces being taken up or down, and trigger a linkUp or linkDown notification as appropriate.

This is exactly equivalent to the configuration:

notificationEvent linkUpTrap   linkUp   ifIndex ifAdminStatus ifOperStatus
notificationEvent linkDownTrap linkDown ifIndex ifAdminStatus ifOperStatus
monitor -r 60 -e linkUpTrap   "Generate linkUp"   ifOperStatus != 2
monitor -r 60 -e linkDownTrap "Generate linkDown" ifOperStatus == 2

It is likely that using a different -r value may do the trick. But it may not, if the linkDown/linkUp traps are hard-coded to a minimum of 60 seconds to avoid the overhead of frequent polling.

Answer Seven:  (by TinyTimmy)  I was already using the "monitor" lines above with a -r value of 2 (two seconds).


However, what is useful to learn is that despite what the net-snmp documentation says about the -r flag controlling the testing interval, the fact is that the testing interval is "hard-coded to a minimum 60 seconds".

That means that if one is using net-snmp, the link-up/down traps are really only useful if one doesn't care about things that could occur in less than a minute.

If one is in a security center, this means that a bad person could insert a man-in-the-middle box, and if the cables are switched quickly, nobody would notice that the link state went down and then came back.

And if one is in a NOC, one could miss transient link drops/recoveries (indicative of an error condition) unless that condition just happened to occur at the instant net-snmp made its once-a-minute test.
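This risk can be quantified: with a fixed polling period, a flap is observed only if a poll tick lands inside the down interval, so any outage shorter than the period can escape detection entirely. A small sketch of that arithmetic (our own illustration, assuming polls at exact 60-second ticks):

```python
import math

POLL_PERIOD = 60.0  # seconds between link-state checks (assumed exact ticks)

def flap_detected(start: float, duration: float, period: float = POLL_PERIOD) -> bool:
    """True if at least one poll tick falls inside [start, start + duration)."""
    first_tick_at_or_after_start = math.ceil(start / period) * period
    return first_tick_at_or_after_start < start + duration

# A 5-second outage starting at t=10 ends at t=15, before the t=60 poll,
# so no linkDown/linkUp trap is ever emitted for it.
```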


SilverCreek -- Connecting remote CLI from SilverCreek? Is it possible?

Question:  (by Syed)  Hi All, I am trying to issue "show ip route" at the CLI prompt of a Cisco or other device.

Is there any method or API available in SilverCreek to send CLI commands using a Tcl script?

Is there any way to access a remote CLI using its IP address? Please help.


Answer One:  (by Sandiyago)  SilverCreek is a Tcl interpreter, so I believe you can use whatever method (telnet, ssh) is available in the Tcl language to connect to the remote server.

You can consider using Expect to automate a login that requires a password, or set up public-key authentication (ssh) to avoid entering a password.

Answer Two:  (by Syed)  Should the agent IP and the telnet/ssh IP address be the same?

Answer Three:  (by Sandiyago)  It may or may not be the same, depending on the target you are trying to log in to. The IP address used for telnet/ssh in the script should be the same as the one you would type in a shell.

Answer Four:  (by Syed)  When I try to ssh to the IP where the agent is running, I get the error below.

Please help me with how to use the "spawn" command.

[ERROR] Remarks: An error occurred during execution 
invalid command name "spawn" 
while executing 
"spawn ssh root@$ip"
(file "./testsuite/private-tests/chTest/testSNMP-TE-TNNL-PE-01.tcl" line 7)
invoked from within 
"source $testfile"
Answer Five:  (by Syed)  I need to execute the expect and spawn commands in SilverCreek:



set ip
set user "root"
set pwd "Welcome123"

spawn ssh root@$ip
expect {
    -re ".*Are.*yes.*no.*" {
        send "yes\r"
        expect "password:"
        send "$pwd\r"
        expect "#"
    }
    "password:" {
        send "$pwd\r"
        expect "#"
    }
}

Answer Six:  (by Sandiyago)  spawn is from the Expect package; did you install the Expect package?

If you are using SilverCreek's Tcl, place the Expect folder under the 'lib' directory; you can see other libraries like "snmptcl" and "tkTable" there. Alternatively, you can place Expect anywhere and update Tcl's "auto_path" so it can be found.

If you are using your own Tcl, install Expect per its installation instructions.

The best approach is to use a standard Tcl, install Expect, and make sure it works; then use it in your SilverCreek test.

Answer Seven:  (by Syed)  I have installed the Expect package in the path below:



From tclsh,

% puts $auto_path
C:/Tcl/lib/tcl8.5 C:/Tcl/lib c:/tcl/lib/teapot/package/win32-ix86/lib
c:/tcl/lib/teapot/package/tcl/lib example/example/

From the SilverCreek console,

Console display active (Tcl8.4.14 / Tk8.4.14)
(SilverCreekMx) 49 % puts $auto_path
{C:/Program Files (x86)/InterWorkingLabs/SilverCreekMx/lib/tcl8.4} {C:/Program
Files (x86)/InterWorkingLabs/SilverCreekMx/bin} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/autoint} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/BWidget-1.4.1} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/compat} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/gdi}{C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/guilib} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/hdc} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/mclistbox-1.02} {C:/Program 
Files (x86)/InterWorkingLabs/SilverCreekMx/lib/printer} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/scdb} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/scgui} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/scotty} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/sctest} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/snmptcl} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/snmptest} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/sscgui} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/tablelist} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/TCL8.3} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/tcl8.4} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/timer} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/TK8.3} {C:/Program Files
(x86)/InterWorkingLabs/SilverCreekMx/lib/tk8.4} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/tkTable} {C:/Program Files 
(x86)/InterWorkingLabs/SilverCreekMx/lib/tools} {C:/Program Files

How do I add the Expect path to the SilverCreek auto_path?

Answer Eight:  (by Sandiyago)  In your script ./testsuite/private-tests/chTest/testSNMP-TE-TNNL-PE-01.tcl, try:


if {[lsearch $::auto_path "C:/Tcl/lib/teapot/package/win32-ix86/lib"] == -1} {
    lappend ::auto_path "C:/Tcl/lib/teapot/package/win32-ix86/lib"
}
package require Expect

On Linux, try

if {[lsearch $::auto_path "/usr/lib"] == -1} {
    lappend ::auto_path "/usr/lib"
}
package require Expect

It should also work if you use "C:/Tcl/lib/teapot/package/win32-ix86/lib/Expect5.43.2", but I don't think that's necessary.

SilverCreek -- How To change the default log path while executing automated testcases in SC?

Question:  (by Syed)  When I executed the test cases, it showed that the log was collected at the location below:

C:/Users/sambalam/AppData/Local/Temp/sctmp-5632-test_SNMP-SLSP-PE-01-32.log

But is it possible to change the default path?

Thanks in Advance.


Answer One:  (by Samwa)  Go to Tools->Options->Misc: "Logging test results to ...".

Answer Two:  (by JimLogajan)  Here are two ways to change the default path (the following is based on the operation of the latest release of SilverCreek):

1) Use the GUI menu selection "Tools" -> "Options" to display the Options dialog window. In that window select the "Misc" tab. Near the bottom should be an entry that allows you to select a folder path where the log files are created. Set it to the desired path and click "OK".

2) Set the user environment variable TMP or TEMP so that one of them contains the desired path. The environment variable value will only be used if the GUI options mechanism doesn't set the path.

Setting environment variables on MS Windows varies depending on the version; check Microsoft documentation for guidance. For example, the way such variables are assigned values when using Windows XP is outlined here:

SilverCreek -- Cannot get FlexLM license to work on my new laptop!

Question:  (by TinyTimmy)  I have a new Fedora laptop and when I was installing SilverCreek, my FlexLM hostid is blank. I've done this before and it has never been blank. What do I do?

Answer One:  (by MrFixIt)  Lots of people are running into this. FlexLM looks for the MAC address on eth0 and nowhere else! eth0 is usually the first network interface, but newer versions of Linux do things differently. If you go to the directory where the devices are mapped (it could be /etc/config, /etc/iftab, or /etc/udev) and rename eth1 to eth0, then it should work.

Answer Two:  (by Jim Logajan)  The following is probably the cause:

Flexlm goes looking for the MAC address on the Ethernet interface named "eth0" and if it doesn't find it, it returns an empty or blank hostid. Most likely your only Ethernet interface was assigned the name "eth1" rather than "eth0".

One way you may be able to rename the interface from "eth1" to "eth0" is to use the Linux "Network" administration GUI tool (you'll need the root password). In the window that appears, select the "Hardware" tab to display a dialog that shows all the interfaces. Select the row with the eth1 device and click the Edit button. In the dialog box that appears you should see a field that allows you to rename the interface to eth0. Just follow through on OKing and saving as indicated. A reboot may not be needed, but probably should be done to ensure the change is persistent.

SilverCreek -- Is there a provision to reboot an agent automatically?

Question:  (by Maverick)  If YES, then please suggest how it is done?

Answer One:  (by Sandiyago)  This depends on whether the device has implemented a MIB that contains a writable object you can set to cause the agent to reboot. For example, a cable modem must have DOCS-CABLE-DEVICE-MIB implemented. The DOCS-CABLE-DEVICE-MIB defines an object:
docsDevResetNow OBJECT-TYPE
    SYNTAX      TruthValue
    MAX-ACCESS  read-write
    STATUS      current
    DESCRIPTION
        "Setting this object to true(1) causes the device to reset.
         Reading this object always returns false(2)."
    ::= { docsDevBase 3 }

So in SilverCreek, you can do a simple SET to reboot the agent automatically:

snmptcl::snmpset DOCS-CABLE-DEVICE-MIB:docsDevResetNow.0 1

Answer Two:  (by Maverick)  Thanks!  

But can I add the same in the Test-> Advanced Configurations: SNMPv3-USM-MIB-> Reboot

i.e. snmptcl::snmpset DOCS-CABLE-DEVICE-MIB:docsDevResetNow.0 1
or just the integer value '1'?

Answer Three:  (by PumpkinJason)  yes, I used 
snmptcl::snmpset DOCS-CABLE-DEVICE-MIB:docsDevResetNow.0 1

SilverCreek -- MIB-II Tests

Question:  (by Bob Ciulla)  Hi, I'm trying to test two SNMP MIB-II attributes from RFC 3418: snmpSilentDrops and snmpProxyDrops.

I'm using SilverCreek M3 Build 2011.111228, and while the documentation states that it supports testing RFC 3418, I do not see specific test cases for the attributes above.

Does anyone know if there are test cases written for these MIB instances?
Thanks in Advance

Answer:  (by PumpkinJason)  SilverCreek test2.2.5.2 and test3.2.5.2 check the snmpSilentDrops counter. As for the snmpProxyDrops counter, it is impossible for SilverCreek to trigger it, since SilverCreek has no idea whether there is a proxy, or when/how the agent under test will deem that "the transmission of the (possibly translated) message to a proxy target failed in a manner (other than a time-out) such that no Response Class PDU (such as a Response-PDU) could be returned."

SilverCreek -- How to see the executed script log?

Question:  (by Syed)  When I executed the test cases, it showed that the log was collected at the location below:

C:/Users/sambalam/AppData/Local/Temp/sctmp-5632-test_SNMP-SLSP-PE-01-32.log

but when I went to the path "C:/Users/sambalam/", I didn't see any folder named "AppData".

Please help me see the collected logs.

Thanks in Advance..



Answer:  (by JimLogajan)  On Windows 7, not all folders are made visible by default, which may explain your problem. A bit of web searching yielded the following web page, which contains some information on how to change the folder options so that hidden folders will appear: missing-users-folder-explorer.html

Hopefully that will guide you.

SilverCreek -- How to get the argument values when the agent responded properly to the operation?

Question:  (by Syed)  Hi all,

SNMP request commands return 0 if the agent responded properly to the operation, or 1 if the agent did not respond properly. How can I confirm this action?

Will it be possible to print that value?

For ex:

snmptcl::snmpset mplsTunnelRowStatus.$mplsTunnelIndex 7

Here I am setting an invalid RowStatus value (7), so the agent should return an error and the command should return 1.  I want that value to be printed. Is it possible?

Answer One:  (by PumpkinJason)  You would want to use the '-rvbinds' option as follows:

(SilverCreekMx) 19 % snmptcl::snmpset sysDescr.0 "Asdad" -rvbinds rv
(SilverCreekMx) 20 % puts $rv
{ SNMPv2-TC:DisplayString Asdad}

If the set fails, you can also get the returned error status via the '-restatus' option:

(SilverCreekMx) 21 % snmptcl::snmpset sysDescr.0 "Asdad" -rvbinds rv -restatus re
(SilverCreekMx) 22 % puts $re

Answer Two:  (by Syed)  Here $rv prints the entire object ({ SNMPv2-TC:DisplayString Asdad }), but I need to catch the "1".

please let me know if any other option is available?

Answer Three:  (by PumpkinJason)  Couldn't you just do

set r [snmptcl::snmpset sysDescr.0 "Asdad" -rvbinds rv]
puts $r

SilverCreek -- How to save the output of a snmptcl::snmpgetcmd to a variable for further usage?

Question:  (by Syed)  My requirement is to get the index value below and save it in a variable. When I did this through the SC GUI, it gave the snippet below:

snmptcl::snmpget -vbinds mplsTunnelIndexNextIndex.1 \
-rvbinds res -restatus estatus -expectstatus noError \
-expectvalue 2 -reindex eindex -octetstringformat 1 -comments comm \
-context $snmptcl(agent)

but when I tried to edit the script (shown below) and re-run it, it failed. Please also help me execute it in the console.
#get mplsTunnelIndexNextIndex.1

set mplsTunnelIndex [snmptcl::snmpget -vbinds mplsTunnelIndexNextIndex.1 \
-rvbinds res -restatus estatus -expectstatus noError \
-reindex eindex -octetstringformat 1 -comments comm \
-context $snmptcl(agent)]
puts $mplsTunnelIndex
if { $comm != "" } {
    append results "\nCOMMENTS: GET on mplsTunnelIndexNextIndex.1; Expected value: $mplsTunnelIndex \n$comm"
    set type "failed"
} else {
    append results "\n \nGET on mplsTunnelIndexNextIndex.1; Expected value: $mplsTunnelIndex \n "
    set type "passed"
}
::snmptools::scriptcmdtool::writeMessage $type $results
set results ""

Answer One:  (by Sandiyago)  When you generate a command using SC GUI, the generated command always uses the default agent context "handle", i.e., the currently connected agent: $snmptcl(agent)


set mplsTunnelIndex [snmptcl::snmpget -vbinds
mplsTunnelIndexNextIndex.1 \
-rvbinds res -restatus estatus -expectstatus noError \
-reindex eindex -octetstringformat 1 -comments comm\
-context $snmptcl(agent)]
puts $mplsTunnelIndex

This won't work. The returned value is only a status indicating whether the request succeeded or failed. It is not the value of the variable.

The value is contained in the variable 'res' given in the "-rvbinds res" option. It is a list.

puts $res

Having read the SC developer guide, I think it is often better to just write your own command. In this case I think you can simply do:

snmptcl::snmpget mplsTunnelIndexNextIndex.1 value -context $snmptcl(agent)
puts $value

Answer Two:  (by Syed)  Thanks a lot. But I still have the queries below:

1. Similar to snmpget, please let me know how to perform "snmpset":

snmptcl::snmpset mplsTunnelRowstatus.1 5 -context $snmptcl(agent)
puts $value

2. When I try to use the SC console, it does not show help for "snmpget" or "snmpset".

Please let me know how the command below should be used in the console.

Answer Three:  (by Sandiyago)  

1. snmptcl::snmpset mplsTunnelRowstatus.1 5 -context $snmptcl(agent)

That should work. It should set mplsTunnelRowstatus.1 to value 5. But in order to create a row, you usually need to set other columnar objects in the same 'set' operation.

2. I didn't see the command you mentioned in your second query, so I can't help.

Answer Four:  (by Syed)  When I tried "snmptcl" or "snmpget" help in the SC console, I didn't get any help for those commands.

(SilverCreekMx) 46 % help snmpget
This command is not supported by the help utility. Try "snmpget /?".
(SilverCreekMx) 47 % snmpget ?
wrong # args: should be "snmpget test oidlist estat eind vbs args"
(SilverCreekMx) 48 % snmpget /?
wrong # args: should be "snmpget test oidlist estat eind vbs args"
(SilverCreekMx) 49 % snmpget help
wrong # args: should be "snmpget test oidlist estat eind vbs args"

How do I see the usage of commands?

Answer Five:  (by Sandiyago)  The command reference is in the SilverCreek developer guide. I found it very useful, since it has lots of examples.

SilverCreek -- How to send a snmp get command?

Question:  (by Sandiyago)  Hi, I need to send SNMP get commands in one of our automated test scripts. Can anyone quickly tell me how I can do that?

We have been using the Net-SNMP command-line utilities but would like to use SilverCreek's Tcl commands, since our test scripts are all written in Tcl. It is "painful" to call SNMP shell commands in our Tcl scripts.

Thanks in advance!

Answer One: (by PumpkinJason)  The following simple commands show you how this can be done:

set ctx [snmptcl::context::create -address \
-version SNMPv1 -rcomm public ]
snmptcl::snmpget sysDescr.0 value -context $ctx
puts $value

Hope this helps!

Answer Two:  (by Syed)

1)  Actually, what will the step below do?

set ctx [snmptcl::context::create -address \
-version SNMPv1 -rcomm public ]

2)  My requirement is to get the index value below and save it in a variable. When I did this through the SC GUI, it gave the snippet below:

snmptcl::snmpget -vbinds mplsTunnelIndexNextIndex.1 \
-rvbinds res -restatus estatus -expectstatus noError \
-expectvalue 2 -reindex eindex -octetstringformat 1 -comments comm\
-context $snmptcl(agent)

but when I tried to edit the script (shown below) and re-run it, it failed. Please also help me execute it in the console.

#get mplsTunnelIndexNextIndex.1
set mplsTunnelIndex [snmptcl::snmpget -vbinds mplsTunnelIndexNextIndex.1 \
-rvbinds res -restatus estatus -expectstatus noError \
-reindex eindex -octetstringformat 1 -comments comm\
-context $snmptcl(agent)]
puts $mplsTunnelIndex

if { $comm != "" } { 
append results "\nCOMMENTS: GET on mplsTunnelIndexNextIndex.1; 
Expected value: $mplsTunnelIndex \n$comm"
set type "failed"
} else { 
append results "\n \nGET on mplsTunnelIndexNextIndex.1; Expected value: 
$mplsTunnelIndex \n "
set type "passed"

::snmptools::scriptcmdtool::writeMessage $type $results
set results ""

Answer Three:  (by Sandiyago)  

set ctx [snmptcl::context::create -address \
-version SNMPv1 -rcomm public ]

This creates an agent context "handle". The context handle "encapsulates" the agent IP address, community string, etc.

Then later you can use the handle like

snmptcl::snmpget sysDescr.0 value -context $ctx

Answer Four:  (by Sandiyago)  When you generate a command using SC GUI, the generated command always uses the default agent context "handle", i.e., the currently connected agent: $snmptcl(agent)

SilverCreek -- Slow down testing?

Question:  (by PumpkinJason)  Hi All, I have a very "slow" agent--apparently it can't keep up with the test packets sent by SilverCreek. Can I slow down the speed at which the test packets are sent? Is this even possible?  Thanks!

Answer One:  (by Sandiyago)  Yes, you can do that: use Test->Fine-tune Testing Options: "Insert a delay between sending test packets...". We once had the same problem, but our agent has since been optimized, so it is no longer an issue for us.

Hope this helps!

Answer Two:  (by PumpkinJason)  Thanks!

SilverCreek -- Disable sysUpTime checking?

Question:  (by Sandiyago)  Hi, all my tests failed while checking the sysUpTime value, since our agent did not implement the sysUpTime object. This makes it impossible to run any meaningful tests. Is there a way I can tell SilverCreek not to check sysUpTime while running tests?  Thanks!

Answer One:  (by illnifan)  Select menu Tools->Options, click on "Test" Tab, and uncheck the check-box "Check System Reboot during each test ....". This should do what you want.

Answer Two:  (by PumpkinJason)  That worked for me! Thanks!

Answer Three:  (by Maverick)  Thanks for the solution, I was also struggling with this. It is very useful.

SilverCreek -- Testing aborted unexpectedly after a few failures?

Question:  (by Sandiyago)  Hi All, I am new to SilverCreek. I noticed that, when I run tests, they always exit after a certain number of failures have been detected. How do I force a test not to exit until it finishes? I want a report of all failures/errors in a single test run.  Thanks in advance!

Answer One:  (by PumpkinJason)  You need to configure Test->Fine-tune Testing Options: "Number of Errors allowed before applicable tests stop...". The default is 10; set it to 0.

SilverCreek -- Change SNMP Timeout Setting

Question:  (by Sandiyago)  I feel this should be a simple question. But I am new to SilverCreek so... Can somebody quickly tell me how can I change the timeout settings?  Thanks a lot!

Answer One:  (by PumpkinJason)  You can change the timeout setting when you connect to your device. There is an optional setting, "timeout and retry settings".  I think that is what you are looking for.  Cheers

Answer Two:  (by Sandiyago)  yes. that's what I was looking for...Thanks!

SilverCreek -- Execute a tcl script from DOS window?

Question:  (by PumpkinJason) Hi all, can anyone quickly tell me how to run a Tcl script from the DOS command line? I have a batch_run_silvecreek_tests.tcl script from a colleague and I need to run it. How should I do it?  Thanks in advance.

Answer:  (by illnifan)  tclsh84.exe your_script.tcl

Hope this helps!

SilverCreek -- SNMP Load Testing

Question:  (by PumpkinJason)  Is there a simple way for me to walk an agent again and again?

Answer One: (by illnifan)  An easy (and recommended) way is to repeatedly run our prepackaged tests. For example, test1.1.2 walks the agent under test. To repeatedly walk the agent, you can use the menu Test->Repeat.

From the Tcl command line, you can also easily run tests using SilverCreek's Tcl command line interface. To simulate multiple managers, you may want to start multiple instances of those batch testing scripts simultaneously, in multiple Tcl interpreters.

Answer Two:  (by PumpkinJason) Perfect!
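The "multiple Tcl interpreters" suggestion above can be scripted. Below is a minimal sketch in Python (our own illustration, not SilverCreek tooling) that launches several copies of a command concurrently; the script name batch_walk.tcl is a made-up placeholder.

```python
import subprocess

def run_parallel(cmd, count):
    """Launch `count` copies of `cmd` concurrently and wait for all of them.
    Returns the list of exit codes."""
    procs = [subprocess.Popen(cmd) for _ in range(count)]
    return [p.wait() for p in procs]

# Hypothetical usage -- "batch_walk.tcl" is an assumed script name:
# run_parallel(["tclsh84.exe", "batch_walk.tcl"], 4)
```

Each subprocess gets its own Tcl interpreter, which matches the "simulate multiple managers" idea in the answer.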

Top Five Failure Predictions for 2013

InterWorking Labs is saving the world from network failures. As a result, we pay attention to failures that are imminent. Here are our predictions for 2013:

1. A major stock exchange will have a catastrophic network failure that cascades through other exchanges and forces trading to halt in world markets. Trading will be halted for more than a day, possibly much longer. The failure will have multiple causes including undiscovered flaws corrupting trades in unanticipated ways. For example, no owners or multiple owners after a trade. The cleanup could be extraordinary, even impossible, as there would be no equitable or principled way to decide who wins and who loses. It could take weeks to sort through all the transactions and make sense of them again. Losses will be in the billions of dollars.

2. Social networking sites will face new lawsuits, not based on privacy issues (as many think), but based on the publication of false and misleading data. It will be called "drive by defamation". The source of this incorrect data will turn out to be hackers and pranksters who easily defeated the weak security of the social networking sites. Rather than address their network security issues, most social networking sites will instead vigorously fight the lawsuits. Internet security experts will step forward and demonstrate the security flaws of the sites. By the end of 2013, the only remaining social networking sites will be the ones with very strong network security and authentication.

3. After a dozen or so cyber attacks against US military intelligence during 2013, the US Department of Defense will understand that it must shift priorities to cyber security instead of military bases, gear, and troops. However, it will be unable to do so because of the budget cycle. The cyber attacks will continue and escalate. They will be covered up in the interests of "national security".

4. By the end of 2013, the debate about a nationwide fiber infrastructure will begin. (What would it mean if all Americans had fiber to the home, giving them 24x7 access to the Internet with 100 Mbps links -- several times their current speed?). The debate will be quickly shut down by the carriers and the FCC in the name of the "security and stability of the Internet".

5. Because of the immense pressure to "engage with social media", everyone with a Twitter account will follow everyone else with a Twitter account (all 500 million of them). This will radically alter the culture of Twitter so that the popularity contest of who has the most followers becomes irrelevant. Twitter management will hold a lot of off-site meetings to rethink the strategy. (Okay ... we are joking.)

Chris reviews: pwSAFE Password Manager

The number one thing I look for in software is a great design.  I like to see an architecture based on every conceivable usage scenario.  A good design encompassing all usage scenarios means that you don't have awkward navigation and clumsy workarounds when important capabilities are requested by users after the initial product release.

I was not actually thinking about this very much when I went off looking for a better password manager than the clumsy text file I was using on a USB memory stick.  The USB solution was good in that you could open the file and copy and paste logins and passwords, thereby defeating keystroke loggers and recorders.  

But, the iPhone and the iPad do not have USB connectors (well... without going through an adapter that may or may not be available). So that was not very convenient.

So I tried out pwSafe, from a company called App77.

At first, I did not get it at all; I was completely bewildered.  There's a little user guide included, but I did not understand how I should set things up and why I should set them up that way.  A couple of emails to tech support and I learned I could create a "Safe" and inside that Safe I could define "Groups" and inside those Groups I could define an entry with a login, password, url, email address, and extra notes and other details.  

For example, you can have a Group called "Personal Info" and inside that group have all the login/passwords for various personal services, like your health care provider's website or your personal Twitter information.  Amazingly, with one push of a button, all of that can open the website and log you in -- no typing or copying and pasting the login, the password, etc.   If you want to share login/password information with others, such as a spouse, you can define another Safe or multiple Safes, and provide the proper credentials to your spouse to gain access.

Multiple backup options are also supported.

There is also an open source version of this.

But for $1.99, why would I bother with an open source solution?  

I would have to say that much as I like "Band of the Day" and "Lose It", this is now my favorite and most valuable app on my iProducts!

Top Ten Reasons InterWorking Labs is better than Apple Computer

#1 When a customer calls us about a repair, and he did his own diagnosis of the problem, we pay attention and give consideration to his findings.

#2 When we replace a defective unit, we do not provide a replacement unit with an obsolete version of the operating system.  We upgrade the replacement unit to the latest version of our software.

#3 When we replace a defective unit, we do not provide a replacement with an EARLIER serial number, meaning a product OLDER than the customer's product that needs repair.  Instead we provide an equivalent or later product.

#4 Most Apple employees could never get a job at InterWorking Labs, because our standards of technical competency are higher:

Apple Genius Bar Worker = Below Average InterWorking Labs employee

#5 Our employees do not use cutesy names for standard products; they never call a USB connector a "camera kit".

#6 Our employees can identify a USB connector.

#7 Our products covered by warranty or a service agreement are fixed or replaced and returned to the customer without editorial comments. We do not force our customers to sign a document stating the obvious: that if they were not under warranty, the repair would cost money. We believe they know this and that is why they bought the warranty or service agreement.

#8 When our customers contact us with a problem, we do not tell them that they can only communicate with us in a "positive" manner.  We do not dictate the emotional tone of our customers' communications.

#9 When our customers return a product for repair, we do not examine all the connector openings with an otoscope to see if there was water damage.  We do not believe our customers would use a network emulator or protocol tester in the shower or the bath tub.

#10 When a customer tells us that he needs to speak to someone with deeper technical knowledge to answer his question, we find the appropriate staff engineer and arrange the communication.  We do not take our lack of deep technical knowledge personally and we do not tell the customer "you are not a very nice person".

In spite of all of the above, I continue to use my iPad.

Fedora 16 NFS issue

Here at IWL we use Fedora Linux rather heavily.

Recently we noticed a problem with Fedora 16 that caused us a lot of trouble.  So we thought it might be a good idea to make a public note of what happened and how we solved it.

The system in question is Fedora 16. Generally we use the 64-bit versions, but we would not think that this issue is any different on 32-bit platforms.

We use NFS file servers, such as the Netgear ReadyNAS units.

When our systems boot up the /etc/fstab file contains several entries designating file systems to be automatically NFS mounted.

In addition we use autofs (the automounter) to mount user home directories when users log in.  These directories are mounted under /home.  However, the full set of user directories resides under one of the mountpoints mounted via fstab, as mentioned above.
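As an illustration, the arrangement described above might look like the following fragments; the server name and export paths here are invented for this example, not our actual configuration.

```
# /etc/fstab -- NFS file systems mounted automatically at boot
nas1:/export/shared    /mnt/shared    nfs    defaults    0 0

# /etc/auto.master -- hand /home to the automounter
/home    /etc/auto.home

# /etc/auto.home -- mount each user's home directory on demand
*    -fstype=nfs    nas1:/export/home/&
```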

What started to happen recently is this: Our set of startup mounts was not being mounted, but the user login mounts via autofs were working.  (We could, however, manually mount the items in /etc/fstab with "mount -a -t nfs".)  The system log file - /var/log/messages - was showing NFS timeout errors, which usually recovered after a while, and failures of the NFS lock daemon, lockd.

As a side effect, some applications such as Firefox, Thunderbird, and sometimes Google Chrome would fail to start (Firefox and Thunderbird also left a .parentlock file in their respective hidden directory hierarchies that prevented subsequent launches).  Many other applications, both KDE and GNOME, did work, however.  Our guess is that the failing programs depend in some way on NFS lock semantics.
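When this happens, the stale lock files have to be removed before Firefox or Thunderbird will start again. Here is a small cleanup sketch (our own illustration; profile locations vary, so adjust the starting directory for your setup):

```python
from pathlib import Path

def clear_parentlocks(root):
    """Delete stale .parentlock files anywhere under `root`.
    Returns the paths that were removed."""
    removed = []
    for lock in Path(root).rglob(".parentlock"):
        lock.unlink()
        removed.append(str(lock))
    return removed

# Typical starting points would be ~/.mozilla and ~/.thunderbird.
```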

This started happening quite recently - with Fedora 16 updates occurring sometime in the first 9 days of May, 2012.

We found a work-around, which is to create user home directories on the local hard drive (and modify our /etc/auto.home files so that those would show up as /home/username.)

That seemed to cure the NFS timeouts, although it still left the automatic mounts via /etc/fstab inoperative (but they could still be mounted manually.)

Firefox, Thunderbird, and Chrome all work smoothly again.

And we could reach our old home directories via NFS without encountering any NFS errors or timeouts.

In our search for solutions we noticed a mention of issues in systemd (systemctl) that may bring up parts of the system too quickly, so that some networking dependencies are not actually satisfied.

In any event we hope that this helps people who encounter this situation.

Apache With RADIUS - Two or More RADIUS Servers

March 16, 2012

We recently added support for RADIUS to Mini Maxwell. This allows logins to the Mini Maxwell's HTTPS control interface to be authenticated via RADIUS.

We first used the relatively well known mod_auth_radius module for the Apache web server.

However we hit a snag - mod_auth_radius can handle only one RADIUS server.  It has no way to define a fallback RADIUS server that will be used if the primary one is non-responsive.

We found an alternative - mod_auth_xradius.

However, the current version, v0.4.6, is fairly old and needs some patches to give it the ability to accommodate multiple RADIUS servers.

We found some useful material online.  However, the patch shown there had some white-space issues which caused the patch process to fail.

So below is a version of the patch that we use - it is essentially identical to the original patch but with clean white-space.

  1. Pull the patch shown below into a file, let's call it patch-file.txt
  2. Fetch the distribution file:
  3. Unpack it:
       tar xjf mod_auth_xradius-0.4.6.tar.bz2
  4. Go into the top level directory:
       cd mod_auth_xradius-0.4.6
  5. Apply the patch:
       patch -p0 < patch-file.txt
  6. You should get a success message that may look like this:
       patching file src/mod_auth_xradius.c
  7. Now you need to build the module and install it using the instructions shown at
  8. We've included a chunk of our Apache configuration file to show how we configure this module.
    Note the AuthBasicProvider xradius line.
--- src/mod_auth_xradius.c.orig	2012-03-15 14:19:25.000000000 -0700
+++ src/mod_auth_xradius.c	2012-03-15 14:23:20.000000000 -0700
@@ -125,15 +125,15 @@
     rctx = xrad_auth_open();
     /* Loop through the array of RADIUS Servers, adding them to the rctx object */
-    sr = (xrad_server_info *) dc->servers->elts;
     for (i = 0; i < dc->servers->nelts; ++i) {        
-        rc = xrad_add_server(rctx, sr[i].hostname, sr[i].port, sr[i].secret,
+        sr = &(((xrad_server_info*)dc->servers->elts)[i]);
+        rc = xrad_add_server(rctx, sr->hostname, sr->port, sr->secret,
                              dc->timeout, dc->maxtries);
         if (rc != 0) {
             ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r,
                           "xradius: Failed to add server '%s:%d': (%d) %s",
-                          sr[i].hostname, sr[i].port, rc, xrad_strerror(rctx));
+                          sr->hostname, sr->port, rc, xrad_strerror(rctx));
             goto run_cleanup;
@@ -294,7 +294,7 @@
     /* To properly use the Pools, this array is allocated from the here, instead of
         inside the directory configuration creation function. */
     if (dc->servers == NULL) {
-        dc->servers = apr_array_make(parms->pool, 4, sizeof(xrad_server_info*));
+        dc->servers = apr_array_make(parms->pool, 4, sizeof(xrad_server_info));
     sr = apr_array_push(dc->servers);
## This Loads mod_auth_xradius into Apache
LoadModule auth_xradius_module /usr/lib/apache/
<IfModule mod_auth_xradius.c>
# AuthXRadiusCache none -
AuthXRadiusCache dbm "/var/cache/auth_xradius_cache"
AuthXRadiusCacheTimeout 300
<Location />
# See http:
AuthName "RADIUS authentication for something or other"
AuthType Basic
AuthXRadiusAddServer "" "2secrets"
AuthXRadiusAddServer "" "secret1"
AuthXRadiusTimeout 5
AuthXRadiusRetries 3
AuthBasicProvider xradius
Require valid-user
</Location>
</IfModule>

IPv6 NIDS evasion

From the SI6 Networks Blog ...

Recently, we assessed the fragmentation and reassembly policies of some popular IPv6 implementations, such that we could evaluate the feasibility of IPv6-fragmentation-based insertion/evasion attacks with current IPv6 implementations (similar to those described by Ptacek and Newsham for IPv4). The aforementioned assessment was not "casual", but was mostly motivated by recent improvements in the IPv6 fragmentation and reassembly implementations of a number of popular IPv6 stacks. The improvements mostly fall into these categories:



As one might expect, all of these aspects are intimately related, and interact with each other in most scenarios.

This article discusses the first two items: the basic fragment reassembly policy of some popular IPv6 implementations (item #1 above) and the processing of IPv6 atomic fragments (item #2 above) of such implementations. Read the Blog entry.

Maxwell Pro -- New Video - Using Maxwell To Test TCP Congestion Avoidance

Implementations of the Transmission Control Protocol, TCP, should contain code that detects and responds to internet congestion.  (See RFC 5681, September 2009.)  TCP congestion detection and avoidance code can be complex, and it is almost always undertested.

This video describes the problem and shows how Maxwell can be used to create controlled and reproducible patterns of congestion for the purpose of testing how well a TCP implementation responds to internet congestion.


Maxwell Pro, Maxwell G, & Mini Maxwell -- Bufferbloat - TCP performance issues

If you are developing or maintaining a TCP stack, or if you are tasked with performance issues for network devices, you should get informed about "Bufferbloat". Wikipedia provides this summary of the Bufferbloat problem:

The problem is that the TCP congestion avoidance algorithm relies on packet drops to determine the bandwidth available. It speeds up the data transfer until packets start to drop, then slows down the connection. Ideally it speeds up and slows down until it finds an equilibrium equal to the speed of the link. However, for this to work the packet drops must occur in a timely manner, so that the algorithm can select a suitable transfer speed. With a large buffer, the packets will arrive, but with a higher latency. The packet is not dropped, so TCP does not slow down even though it really should. It does not slow down until it has sent so much beyond the capacity of the link that the buffer fills and drops packets, but this then means it has far overestimated the speed of the link.
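The arithmetic behind the problem is simple: a full buffer in front of a link adds latency equal to the buffer size divided by the link rate. A quick sketch:

```python
def queue_delay_seconds(buffer_bytes, link_bps):
    """Worst-case latency added by a full buffer draining onto a link."""
    return buffer_bytes * 8 / link_bps

# A 1 MB buffer ahead of a 10 Mbit/s link adds 0.8 seconds of queuing
# delay -- long after TCP should have detected congestion and backed off.
```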

In addition to the Wikipedia article, there is also a presentation from the Prague IETF meeting.

Finally, there is a wiki devoted to developing news.

InterWorking Labs is experimenting with Bufferbloat using Maxwell in our lab network, but so far we have been unable to reproduce it consistently.  For more information, please contact InterWorking Labs.

Mini Maxwell -- Driving Mini Maxwell From a Script

April 5, 2011

We have published a small Python program that may be used to drive a Mini Maxwell directly without the need for a user to operate the Mini Maxwell web pages.

This program is an alternative to the spreadsheet based facility that has been available to script Mini Maxwell.

The spreadsheet is constrained to a repeating sequence of a baseline plus up to twelve program steps.

This new program may be modified by the user to do any number of steps and be used inside a more sophisticated test sequencing harness provided by the user.  The program exists as a shell command - so it fits nicely within any typical test language framework written using TCL, Perl, Python, etc.

It is expected that the user will make copies of this program and use those copies as templates for special-purposed versions such that each version imposes one set of impairment values into the Mini Maxwell.

(In addition, the user is expected to modify the program to inform the program whether the Mini Maxwell is Revision 12 or later.)

This program requires Python version 2.6 or later.

How to obtain: This program is available via the IWL support website. It may be found under the "Support" | "Customer Downloads" menu among the items available for "Mini Maxwell".

Maxwell Pro -- Maxwell Resequencing Plugin

April 5, 2011

A new Maxwell plugin that can do user controlled re-sequencing of packets is available as part of the latest Maxwell TCP testing package.

The plugin is able to re-arrange packets in a flow so that, for example, packets that originate in sequence A, B, C, D, E, F will arrive in the order C, B, A, D, F, E.

The plugin allows user control of the re-sequence pattern and several other parameters.

Two scenarios have been created to allow the user to launch this plugin with a mouse click:

  • A basic scenario that gives user access on the graphical user interface to all of the re-sequencing plugin controls.
  • A scenario to subject a TCP connection to test that stresses the TCP stack's congestion avoidance code.  This scenario will re-sequence TCP ACK packets while simultaneously subjecting other packets in the TCP connection to a sawtooth pattern of rising and falling delay and packet loss.

This plugin gives the user greater control over packet reordering than is possible using the standard jitter impairment.

Areas in which this plugin may prove useful:

  • Testing VoIP and IPTV devices to evaluate how well they can handle media streams with reordered media packets.
  • Testing DNSSEC implementations where delayed responses arrive in an order different from the query sequence.
  • Testing a DHCP client's ability to handle multiple answers.
  • etc.

The plugin works on the concept of a "group" of packets (where the group size is from 1 to 9 packets.)

For instance, the user can define a group size and specify the order in which the packets are released from the group.  Thus with a group size of 4 containing packets A, B, C, and D the user could specify that they are released in order C, D, B, A.

There is another parameter to specify the number of packets between groups.

And another parameter specifies a limit on the time to accumulate the N packets of a group - if that limit is reached the group is drained, with a flag specifying whether to apply as much of the release pattern as possible when draining.

Note: re-sequencing causes packets to be delayed as they wait for the re-sequence group to be accumulated.  Consequently, re-sequencing will override the effect of any delay or jitter added by the standard impairments on the flow being re-sequenced.
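The plugin's internals are not shown here, but its group-based reordering can be modeled in a few lines. In this sketch (our own illustration, not the plugin's code), release_order lists the 0-based positions in which a complete group's packets are released; a trailing partial group is drained unchanged, standing in for the timeout behavior described above.

```python
def resequence(packets, group_size, release_order):
    """Reorder a packet stream in fixed-size groups.

    `release_order` gives, for each release slot, the index within the
    group of the packet to emit. A trailing partial group is released
    unchanged, mimicking the plugin's timeout drain."""
    out = []
    for start in range(0, len(packets), group_size):
        group = packets[start:start + group_size]
        if len(group) == group_size:
            out.extend(group[i] for i in release_order)
        else:
            out.extend(group)  # partial group: drain as-is
    return out

# With groups of 4 and release order C, D, B, A:
# resequence(list("ABCD"), 4, [2, 3, 1, 0]) -> ['C', 'D', 'B', 'A']
```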

SilverCreek Technical Support Comparison


Hi Technical Support!

Could you please clarify the difference between these two situations for SNMPv3:

Case One: Send a GET request with the field contextEngineID encoded as a zero-length OCTET STRING.

Case Two: Send an unauthenticated GET request with the msgAuthoritativeEngineID, contextEngineID and msgUserName fields encoded as zero-length OCTET STRINGs.

It seems to me in both cases, empty contextEngineIDs are being sent, so the agent should respond the same way in both situations. My agent is failing Case One and passing Case Two. Could you throw some light on this for me?

Brand X Answer:

Try rebooting your agent.

InterWorking Labs Answer:

In Case One, the expected outcome is for the agent to:
(a) drop the message,
(b) increment the snmpUnknownPDUHandlers counter and return snmpUnknownPDUHandlers in a Report, and
(c) NOT increment the usmStatsUnknownEngineIDs counter.

Case One sends a PDU with an empty contextEngineID which is different from Case Two where it sends a PDU with an empty msgAuthoritativeEngineID (and contextEngineID).

In Case Two, before your code gets to process the contextEngineID, it should already have detected that the test packet is an engineID discovery packet. So you should simply return an unknownEngineID report and discard the test packet without further processing.

Case One refers to rfc3412, section
===== Incoming Requests and Notifications

The following procedures are followed for the dispatching of PDUs when the value of sendPduHandle is <none>, indicating this is a request or notification.

1) The combination of contextEngineID and pduType is used to determine which application has registered for this request or notification:

2) If no application has registered for the combination, then:

a) The snmpUnknownPDUHandlers counter is incremented.


This only happens when you get to process scopedPDU. That is, in a later stage than when engineID discovery occurs in Case Two.

Note, in Case Two, msgAuthoritativeEngineID in UsmSecurityParameters is empty, thus the request should be treated as an engineID discovery packet!


InterWorking Labs

Follow up Question:

Hi Technical Support!

When I added the necessary code, I am unable to detect the agent: it reports that it received an unknownEngineID report, or sometimes a not-in-time-window error. I noticed that, during the detection process, it sends a context engine ID length of zero after getting the msgContext engine ID in a report. Can you please clarify the agent detection process?

Brand X Answer:

Your code is not working.

InterWorking Labs Answer:

Hi Customer,

Case One sends a test packet with only the contextEngineID encoded as a zero-length OCTET STRING.  Since the correct msgAuthoritativeEngineID IS included in the test packet, the agent under test MUST NOT send back the "unknownEngineID" report.  Because the packet contains an empty contextEngineID, the agent should return snmpUnknownPDUHandlers in a REPORT, because obviously there are no PDU handlers (SNMP applications) registered for the "empty contextEngineID".

Customer writes:

I noticed that, during the detection process it is sending a context engine id length of zero after getting the msgContext engine ID in a report.

Case One issues the following packets:

#GET initial counter values
get sysUpTime.0
get snmpUnknownPDUHandlers.0
get usmStatsUnknownEngineIDs.0

#Issue the test packet
get snmpEngineBoots with an empty contextEngineID

#After sending the test packet, GET counter values again
get sysUpTime.0
get snmpUnknownPDUHandlers.0
get usmStatsUnknownEngineIDs.0
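In other words, the test brackets the probe with counter reads and checks the deltas. A sketch of the pass criterion implied above (our reading of the expected behavior, not SilverCreek's published check):

```python
def case_one_passes(before, after):
    """Given counter samples taken before and after the Case One probe,
    the agent passes if snmpUnknownPDUHandlers incremented and
    usmStatsUnknownEngineIDs did not."""
    return (after["snmpUnknownPDUHandlers"] > before["snmpUnknownPDUHandlers"]
            and after["usmStatsUnknownEngineIDs"] == before["usmStatsUnknownEngineIDs"])
```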


InterWorking Labs


Maxwell -- Why Your Cruise Ship Internet Is So Slow and What Can Be Done To Make It Better

Internet service from a cruise ship is often a painful experience.

Watch our new video and learn why - and how Maxwell can be used to test solutions.

Come visit the IWL channel on YouTube.