[Users] Low Throughput

ticolucci ticolucci at gmail.com
Wed Mar 16 11:05:48 CET 2011


Hi Vzurczak!

Thanks for your concern! =D




vzurczak wrote:
> 
> I don't understand how it can be so slow. :? 
> Some people are using this component with quite complex BPEL processes, and messages are processed much faster.
> 

=/
Weird...
Do you know if any of those complex processes are open source, so that I can run one of them in my environment and compare the results? =)
 


> 
> Would it be possible to precisely describe your running conditions?
> Which client, on which machine, which calls to which node, how is exposed the BPEL process, etc. 
> I think it is a small thing to change, the question being "which one?". ;) 
> 


Here are the specs of the machines on the LAN:

Code:

ticolucci at aguia1:~$ cat /proc/cpuinfo 
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 15
model		: 4
model name	: Intel(R) Pentium(R) 4 CPU 3.00GHz
stepping	: 9
cpu MHz		: 3000.013
cache size	: 1024 KB
physical id	: 0
siblings	: 2
core id		: 0
cpu cores	: 1
fdiv_bug	: no
hlt_bug		: no
f00f_bug	: no
coma_bug	: no
fpu		: yes
fpu_exception	: yes
cpuid level	: 5
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pebs bts pni monitor ds_cpl cid cx16 xtpr lahf_lm
bogomips	: 6000.02
clflush size	: 64

processor	: 1
vendor_id	: GenuineIntel
cpu family	: 15
model		: 4
model name	: Intel(R) Pentium(R) 4 CPU 3.00GHz
stepping	: 9
cpu MHz		: 3000.013
cache size	: 1024 KB
physical id	: 0
siblings	: 2
core id		: 0
cpu cores	: 1
fdiv_bug	: no
hlt_bug		: no
f00f_bug	: no
coma_bug	: no
fpu		: yes
fpu_exception	: yes
cpuid level	: 5
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc pebs bts pni monitor ds_cpl cid cx16 xtpr lahf_lm
bogomips	: 6000.04
clflush size	: 64




ticolucci at aguia1:~$ free
             total       used       free     shared    buffers     cached
Mem:       1033692    1022416      11276          0     130960     574544
-/+ buffers/cache:     316912     716780
Swap:      3028212        184    3028028




ticolucci at aguia1:~$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/Ubuntu-root
                      144G   64G   73G  47% /
varrun                505M  144K  505M   1% /var/run
varlock               505M     0  505M   0% /var/lock
udev                  505M   52K  505M   1% /dev
devshm                505M     0  505M   0% /dev/shm
/dev/sda1             228M   44M  173M  21% /boot
ticolucci at aguia1:~$ 







There are 8 machines, as described above, connected via Gigabit Ethernet to one switch (I don't have the exact specs of the connections, but tests suggest that the average bandwidth is around 13 Mb/s).
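
For reference, a rough way to double-check the link speed between two of the machines could look like the sketch below (just an illustration, not the actual test I ran; the port number and transfer size are arbitrary choices, and a dedicated tool would of course be more accurate):

Code:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Rough point-to-point bandwidth check between two LAN machines.
// Run "java BandwidthCheck server" on one host, then
// "java BandwidthCheck <server-host>" on another.
public class BandwidthCheck {

    private static final int PORT = 5555;                         // arbitrary free port
    private static final long BYTES_TO_SEND = 100L * 1024 * 1024; // 100 MB test payload

    public static void main(String[] args) throws Exception {
        if (args.length == 1 && args[0].equals("server")) {
            // Server side: count the bytes received and time the transfer.
            try (ServerSocket server = new ServerSocket(PORT);
                 Socket socket = server.accept();
                 InputStream in = socket.getInputStream()) {
                byte[] buffer = new byte[64 * 1024];
                long received = 0;
                long start = System.nanoTime();
                int n;
                while ((n = in.read(buffer)) != -1) {
                    received += n;
                }
                double seconds = (System.nanoTime() - start) / 1e9;
                double megabits = received * 8 / 1e6;
                System.out.printf("Received %.0f Mb in %.2f s -> %.1f Mb/s%n",
                        megabits, seconds, megabits / seconds);
            }
        } else {
            // Client side: stream a fixed amount of data to the server.
            try (Socket socket = new Socket(args[0], PORT);
                 OutputStream out = socket.getOutputStream()) {
                byte[] buffer = new byte[64 * 1024];
                long sent = 0;
                while (sent < BYTES_TO_SEND) {
                    out.write(buffer);
                    sent += buffer.length;
                }
            }
        }
    }
}
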

On each machine there is an instance of Petals. The topology is master-slave (aguia1 is the master and the other 7 are slaves).


Even though Petals is running on all 8 machines, the process I create uses only 7 of them (I install each node of the process on a separate machine of the LAN).
For example:
If I create a composition with one root and two leaves, the root will be installed on aguia1 and the leaves on aguia2 and aguia3, giving the following diagram:

Code:

   r
 /  \
o    o

r = root
o = leaf






Finally, aguia8 is the machine used as the client. It runs JMeter to test the composition. My best throughput so far was 400 msgs/min (6.66 msgs/sec), with 20 threads sending messages to the composition with a delay of 1 second each (each thread sends a message and waits 1 second before sending the next one).
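
To put that number in perspective, here is a rough back-of-the-envelope check (just a sketch, assuming the JMeter plan behaves as a closed loop, i.e. the 1-second delay only starts after the response comes back):

Code:

// Rough sanity check of what 400 msgs/min implies, assuming the JMeter plan
// behaves as a closed loop: each thread sends a request, waits for the
// response, then pauses 1 second before sending the next request.
public class ThroughputEstimate {
    public static void main(String[] args) {
        double threads = 20.0;                    // JMeter threads
        double thinkTimeSec = 1.0;                // fixed delay between requests
        double throughputPerSec = 400.0 / 60.0;   // 400 msgs/min observed

        // Closed loop: throughput = threads / (responseTime + thinkTime),
        // so the average response time implied by the measurement is:
        double responseTimeSec = threads / throughputPerSec - thinkTimeSec;

        System.out.printf("Observed throughput : %.2f msg/s%n", throughputPerSec);
        System.out.printf("Implied avg latency : %.2f s per message%n", responseTimeSec);
    }
}

If that assumption holds, the numbers imply an average end-to-end latency of about 2 seconds per message, which is exactly the slowness I'm trying to track down.
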




Sorry for the giant post... I wanted to include every detail of my environment so that you guys could help me more easily. =)
If anything wasn't clear enough, please let me know.


Thanks again,
Thiago

PS: Happy New Year!









