How to troubleshoot PCoIP performance

source link: https://myvirtualcloud.net/how-to-troubleshoot-pcoip-performance/

  • 03/26/2010

Recently I have been doing some performance troubleshooting on the PCoIP display protocol. Well, actually not much, as PCoIP works out of the box for most VMware View deployments without any intervention.

The protocol has a self-healing and adaptive nature. If you are not familiar with the progressive build and adaptive behaviour of the PCoIP display protocol, read this first.

Now that you are familiar with it, let's see how to start the troubleshooting process.

A common complaint is poor performance, with mouse lag and screen delays. Most often this type of problem is diagnosed as a lack of network bandwidth for multimedia redirection (MMR) or a high display resolution with multi-monitor support. However, to find out exactly what is happening it is necessary to look at the PCoIP server log files and the network. I'll leave the network to the networkers.

There are two log files that should provide enough information to start troubleshooting. They are located in the virtual desktop at 'c:\Documents and Settings\All Users\Application Data\VMware\VDM\logs'.

UPDATE (06/12/2010) – In Windows 7 the path has changed to ‘c:\program data\application data\VMware\VDM\logs’.

pcoip_server.txt – all transactions related to encoding, virtual channels, image management, bandwidth, etc.

pcoip_agent.txt – all client-side transactions, such as connectivity, handshake, etc.
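If you need to collect these files from more than a handful of desktops, a small script can pick whichever path exists in the guest. Below is a minimal Python sketch using only the two locations quoted above; the paths (and even the file names) can vary between View versions, so treat them as assumptions to adjust for your environment.

import os

# Log directories quoted in this post (XP/2003 vs. Windows 7).
# These are assumptions taken from the article; adjust for your View version.
CANDIDATE_DIRS = [
    r"C:\Documents and Settings\All Users\Application Data\VMware\VDM\logs",
    r"C:\program data\application data\VMware\VDM\logs",
]

def find_pcoip_logs():
    """Return (pcoip_server.txt path, pcoip_agent.txt path), or None where missing."""
    for base in CANDIDATE_DIRS:
        server = os.path.join(base, "pcoip_server.txt")
        agent = os.path.join(base, "pcoip_agent.txt")
        if os.path.isfile(server):
            return server, agent if os.path.isfile(agent) else None
    return None, None

if __name__ == "__main__":
    server_log, agent_log = find_pcoip_logs()
    print("pcoip_server.txt:", server_log or "not found")
    print("pcoip_agent.txt :", agent_log or "not found")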

In this post I'll talk about the pcoip_server log file. The log starts with a number of pieces of information collected from the environment. Pay attention to the following:

03/25/2010, 09:35:12.107> LVL:0 RC: 0 MGMT_ENV :cTERA_MGMT_CFG::Registry setting parameter pcoip.max_link_rate = 512

The VMware View unsupported Admin templates for AD GPOs allow you to configure the maximum bandwidth consumption per PCoIP session. Make sure this is not in use, or that it is set to a reasonable number.
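If you just want to know whether a session came up with a configured cap, you can pull that registry line straight out of the log. A minimal sketch, assuming the line format shown above; interpret the value against whatever your GPO actually sets.

import re

# Matches the MGMT_ENV line shown above, e.g.
# "Registry setting parameter pcoip.max_link_rate = 512"
MAX_LINK_RATE_RE = re.compile(r"pcoip\.max_link_rate\s*=\s*(\d+)")

def max_link_rate(log_path):
    """Return the logged pcoip.max_link_rate value, or None if it is not logged."""
    with open(log_path, errors="replace") as f:
        for line in f:
            m = MAX_LINK_RATE_RE.search(line)
            if m:
                return int(m.group(1))
    return None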

The extract below demonstrates a very constrained network, with PCoIP dynamically adjusting the maximum transmission rate down to 64Kb/s. The log also provides information about the round trip time, variance, decode rate and RTO (retransmission timeout); the lower the round trip time and RTO, the better the interactive performance. The VGMAC line also shows packet loss of 0.09% on the transmit side (the second figure in Loss=0.00%/0.09% (R/T)).

14:21:00.822> LVL:2 RC: 0 MGMT_PCOIP_DATA :Tx thread info: round trip time (ms) = 6, variance = 1, rto = 107
14:21:17.321> LVL:2 RC: 0 MGMT_IMG :log: cur_s 1 max_s 30 tbl 5 bwc 0.05 bwt 0.45 fps 2.03 fl_ps 30.33
14:21:17.321> LVL:2 RC: 0 MGMT_IMG :log: chg pix: 34688, chg pix not motion: 34688
14:21:17.321> LVL:2 RC: 0 MGMT_IMG :log: delta bits encoded: 970136, delta build bits encoded: 581248.
14:21:17.321> LVL:2 RC: 0 MGMT_IMG :log: enc bits/pixel – 27.97, enc bits/sec – 32329.12, enc MPix/sec – 0.00, decode rate est (MBit/sec) – 1.51
14:21:25.946> LVL:2 RC: 0 MGMT_PCOIP_DATA :Tx thread info: bw limit = 64, plateau = 62.7, avg tx = 5.5, avg rx = 2.4 (KBytes/s)
14:21:25.946> LVL:1 RC: 0 VGMAC :Stat frms: R=000000/000000/334586 T=014734/298585/136743 (A/I/O) Loss=0.00%/0.09% (R/T)
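Rather than eyeballing these numbers in a long log, you can extract them and watch how they move over a session. The sketch below is keyed to exactly the three line formats shown in the extract above (round trip time, bw limit and VGMAC loss); it is a minimal example, and the field names are just labels I have chosen for the values as they appear in the log.

import re

# "Tx thread info: round trip time (ms) = 6, variance = 1, rto = 107"
RTT_RE = re.compile(
    r"round trip time \(ms\) = (?P<rtt>\d+), variance = (?P<var>\d+), rto = (?P<rto>\d+)")
# "Tx thread info: bw limit = 64, plateau = 62.7, avg tx = 5.5, avg rx = 2.4 (KBytes/s)"
BW_RE = re.compile(
    r"bw limit = (?P<limit>[\d.]+), plateau = (?P<plateau>[\d.]+), "
    r"avg tx = (?P<tx>[\d.]+), avg rx = (?P<rx>[\d.]+)")
# "VGMAC :Stat frms: ... Loss=0.00%/0.09% (R/T)"
LOSS_RE = re.compile(r"Loss=(?P<rx_loss>[\d.]+)%/(?P<tx_loss>[\d.]+)% \(R/T\)")

def pcoip_samples(log_path):
    """Yield ('rtt'|'bw'|'loss', values) for every matching line in pcoip_server.txt."""
    with open(log_path, errors="replace") as f:
        for line in f:
            for kind, pattern in (("rtt", RTT_RE), ("bw", BW_RE), ("loss", LOSS_RE)):
                m = pattern.search(line)
                if m:
                    yield kind, {k: float(v) for k, v in m.groupdict().items()}
                    break

if __name__ == "__main__":
    # Print every sample; constrained periods stand out as low bw limits and rising loss.
    for kind, values in pcoip_samples("pcoip_server.txt"):
        print(kind, values)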

A scenario like this could be found in a sliced network with QoS prioritising all known traffic and leaving minimal bandwidth to be shared across all unknown traffic. The solution could be as simple as adding a QoS prioritisation rule for PCoIP. However, every case is different, so engage your network team.

Now, below you will see an extract with better network throughput. However, because of the packet loss identified (Loss=0.45%/0.21% (R/T)), caused by reduced available bandwidth at that moment, PCoIP dynamically reduced its bandwidth consumption to 176Kb/s.

14:16:17.430> LVL:2 RC: 0 MGMT_PCOIP_DATA :Tx thread info: bw limit = 234, plateau = 235.0, avg tx = 194.7, avg rx = 3.4 (KBytes/s)
14:16:17.430> LVL:1 RC: 0 VGMAC :Stat frms: R=000000/000000/027895 T=011578/041311/010192 (A/I/O) Loss=0.45%/0.21% (R/T)
14:16:17.805> LVL:1 RC: 0 MGMT_PCOIP_DATA :BW: Decrease (loss) old = 234.9982 new = 176.8438
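The 'BW: Decrease (loss)' entries are the clearest marker that the protocol is backing off, so they are worth pulling out on their own. A minimal sketch against the line format shown above:

import re

# "MGMT_PCOIP_DATA :BW: Decrease (loss) old = 234.9982 new = 176.8438"
DECREASE_RE = re.compile(
    r"BW: Decrease \(loss\) old = (?P<old>[\d.]+) new = (?P<new>[\d.]+)")

def bandwidth_decreases(log_path):
    """Return (old, new) bandwidth limits for every loss-driven decrease in the log."""
    events = []
    with open(log_path, errors="replace") as f:
        for line in f:
            m = DECREASE_RE.search(line)
            if m:
                events.append((float(m.group("old")), float(m.group("new"))))
    return events

A handful of these events over a long session is normal adaptation; a steady stream of them points at sustained loss that is worth raising with the network team.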

Using the same technique PCoIP will dynamically increase the bandwidth if it is required by the protocol and if it is available on the network.

The sequence below demonstrates two decrease events; between them, the bandwidth was automatically increased. The gradual increase in bandwidth consumption is not shown in the log file.

14:16:17.805> LVL:1 RC: 0 MGMT_PCOIP_DATA :BW: Decrease (loss) old = 234.9982 new = 176.8438
14:16:18.383> LVL:2 RC: 0 MGMT_IMG :rcv nak 2 seq_id 184 disp 0 fsp 5 f_ref 33
14:16:18.383> LVL:2 RC: 0 MGMT_IMG :SW_HOST_IPC: NAK for fsp 5 seq 33. (ref id=82) Ack ref available, recode from input
14:16:18.415> LVL:2 RC: 0 MGMT_IMG :SW_HOST_IPC: IPLP: recode from input: fsp=5 refvld=1 seq=34
14:16:18.524> LVL:2 RC: 0 MGMT_IMG :SW_HOST_IPC: Encoder clearing recode for fsp 5 seq 34 (ref id=11)
14:16:21.305> LVL:2 RC: 0 MGMT_IMG :rcv nak 2 seq_id 222 disp 0 fsp 4 f_ref 41
14:16:21.305> LVL:2 RC: 0 MGMT_IMG :SW_HOST_IPC: NAK for fsp 4 seq 41. (ref id=82) Ack ref available, recode from input
14:16:21.430> LVL:2 RC: 0 MGMT_IMG :SW_HOST_IPC: IPLP: recode from input: fsp=4 refvld=1 seq=42
14:16:21.587> LVL:2 RC: 0 MGMT_IMG :SW_HOST_IPC: Encoder clearing recode for fsp 4 seq 42 (ref id=61)
14:16:22.524> LVL:1 RC: 0 MGMT_PCOIP_DATA :BW: Decrease (loss) old = 207.2631 new = 143.0061
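To see that adaptive behaviour across a whole session, you can count the NAK/recode events that precede each decrease, which gives a rough picture of how much retransmission pressure triggered each back-off. A minimal sketch, again keyed to the line formats in the extracts above:

import re

NAK_RE = re.compile(r"rcv nak")                  # "MGMT_IMG :rcv nak 2 seq_id ..."
DECREASE_RE = re.compile(r"BW: Decrease \(loss\)")

def naks_before_decreases(log_path):
    """Return the number of NAKs seen before each bandwidth decrease, in log order."""
    counts, naks = [], 0
    with open(log_path, errors="replace") as f:
        for line in f:
            if NAK_RE.search(line):
                naks += 1
            elif DECREASE_RE.search(line):
                counts.append(naks)
                naks = 0
    return counts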

I hope this gives you an introduction to troubleshooting PCoIP performance.

I’ll come back with more information soon.

