Can you provide more information?
Are you able to run the delivered examples?
Why are you running such an old version?
Here is what I learnt from your post:
You are using Windows 7
You are using the 32bit OPNET GUI version 17.1 PL1, Released Oct - 2011
You are running a 64bit optimized simulation
Up to here, everything is OK.
The software is having trouble exchanging information via the Inter-Process Communication (IPC) port. Maybe you ran out of memory?
Did you notice your simulation is still running as a standalone simulation? If you look at the Windows Task Manager, you will see op_runsim_opt.exe is still running.
Once the simulation completes, relaunch the OPNET GUI. The GUI may prompt you to recover the old results. Ignore these files. Your DES-log will show any logged messages and your results should appear in the statistics.
[This doesn't solve your problem, but it does provide access to your results...]
You could also try running your simulation from the command line as a standalone simulation.
Yes, you are quite right. I installed OPNET 17.1 on a 64-bit Windows 7 machine and run simulations on it.
I am simulating a large network with 192 nodes and collecting lots of lower-level statistics such as collision counts and utilization.
I tested my code many times, and it turned out that it was not my code that caused the problem. While the simulation was running, it suddenly stopped. I checked the DES log and another log and found this error.
Your diagnosis sounds about right. If I run 64 nodes, the error does not show, but when I increase the number to 192, the error turns up, along with other errors such as program faults and invalid memory accesses. These errors seem to be random and can appear at any stage of the simulation.
I will describe what I did in the simulation; maybe that helps with your analysis.
1) I created a 6LoWPAN model by interfacing an open-zb MAC with the built-in IP model from below. (I put my own models above and below the IP model; what do you think of this?)
2) I also created another gateway node model by combining this 6LoWPAN model with a WLAN node.
3) I collected all the statistics, including lower-level ones, using "all values" mode rather than the other modes.
4) I am simulating a very large network with almost 200 nodes.
5) The simulation runs very slowly in spite of using the optimized mode. For example, a 2-minute run with three seeds can take up to 11 hours.
With all this information, could you give some hints on 1) how to debug the above problems and make the simulation run more smoothly, and 2) what attributes or parameters to change to speed up the simulation?
Network topology and node models are attached.
I have a previous message that is still being moderated. Once moderation finishes, you may see some interesting comments.
I took a look at your picture node_model_1.jpg. I noticed your source and sink modules were not connected to anything. I doubt they are sending the data you intend. I got the picture below from the lab "Simple Queuing Network using OPNET". To find the lab, paste that text into the "Search the community" box.
When you ran the scenario with 64 nodes, did you see the results you expected?
I would recommend reducing the size of your network until you are able to work out all of the program aborts.
Also, you indicate you are collecting lots of lower level statistics. Is this necessary? Are they being collected as all values, or in buckets?
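To make the memory concern concrete, here is a small Python sketch (plain Python, not OPNET code; the class and method names are mine) contrasting "all values" collection, which stores every sample, with a bucketed collector that keeps only one aggregate per interval. With 192 nodes each writing lower-level statistics many times per simulated second, the "all values" store grows without bound, which fits the out-of-memory and random-abort symptoms:

```python
# Illustration only: "all values" recording keeps every sample,
# bucketed recording keeps one aggregate per time bucket.
# These classes are hypothetical, not OPNET API calls.

class AllValuesStat:
    def __init__(self):
        self.samples = []          # grows without bound

    def write(self, value):
        self.samples.append(value)

class BucketStat:
    def __init__(self, bucket_width):
        self.bucket_width = bucket_width
        self.buckets = {}          # bucket index -> (count, sum)

    def write(self, time, value):
        idx = int(time // self.bucket_width)
        count, total = self.buckets.get(idx, (0, 0.0))
        self.buckets[idx] = (count + 1, total + value)

    def means(self):
        return {i: s / c for i, (c, s) in self.buckets.items()}

# Shortened demo: one sample per simulated millisecond for 10 s.
all_vals = AllValuesStat()
bucketed = BucketStat(bucket_width=1.0)   # 1-second buckets
for step in range(10_000):
    t = step * 0.001
    all_vals.write(1.0)
    bucketed.write(t, 1.0)

print(len(all_vals.samples))   # 10000 stored samples
print(len(bucketed.buckets))   # 10 aggregates
```

Scaled to 192 nodes over a 2-minute run, the same ratio is the difference between millions of stored values and a few hundred aggregates per statistic.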
Thanks very much for your comments on the model.
From your conjecture about the source and sink nodes, I can see I made a mistake, since I did point out the data source and how the data flows in the attached picture. In fact, the source and sink nodes are not used to generate data; I forgot to delete them when I first built the model. The data is generated by the 6LoWPAN model in the first picture (node model 2) and then forwarded to the gateway node (node model 1) in the middle of each cluster. After that, the gateway model processes the data and sends it to a WLAN sink at the center of the topology. All models use wireless transceivers.
The data source of the gateway is in the first picture I attached (see the upper layer). When I ran the model with 64 nodes, I got all the data I intended to obtain. The reason I chose "all values" mode over "bucket" mode is that I found the bucket mode sometimes seems inaccurate in interpreting the results, with much larger values than I would have expected. For this reason, I used "all values" for all my statistics, whether higher- or lower-level ones.
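A side note on the bucket values looking larger than expected: this is often a capture-mode question rather than an accuracy problem. If a bucketed statistic is aggregated by sum, the reported value grows with the number of samples falling into each bucket, whereas a sample-mean aggregation does not. A small plain-Python illustration (the `bucketize` function is mine, not an OPNET API; check the statistic's capture mode settings in your installation):

```python
# Sketch of why bucketed results can look inflated: a bucket aggregated
# as "sum" scales with the samples per bucket; "mean" does not.

def bucketize(samples, width, mode):
    """samples: list of (time, value); returns bucket index -> aggregate."""
    buckets = {}
    for t, v in samples:
        buckets.setdefault(int(t // width), []).append(v)
    if mode == "sum":
        return {i: sum(vs) for i, vs in buckets.items()}
    if mode == "mean":
        return {i: sum(vs) / len(vs) for i, vs in buckets.items()}
    raise ValueError(mode)

# A utilization of 0.5 sampled 100 times per second, for 3 seconds:
samples = [(s * 0.01, 0.5) for s in range(300)]

print(bucketize(samples, 1.0, "sum"))   # {0: 50.0, 1: 50.0, 2: 50.0} -- looks huge
print(bucketize(samples, 1.0, "mean"))  # {0: 0.5, 1: 0.5, 2: 0.5}
```

So before ruling out bucket mode entirely, it may be worth checking which aggregation the suspicious statistics were using.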
I attached cdb.exe (the Windows debugging toolkit) to the OPNET debugger; it stopped when the simulation aborted, without pinpointing any line of code whatsoever. The log gave a function in the virtual operating system, which is where things get serious, so I posted the function on the Splash.
Looking forward to hearing from you again!