Edit(2): Now using db-mysql with the generic-pool module. The error rate has dropped significantly and hovers around 13%, but throughput is still about 100 req/sec.
Edit(1): After someone suggested that ORDER BY RAND() causes MySQL to slow down, I removed that clause from the query. Node.js now hovers around 100 req/sec, but the server still reports 'CONNECTION error: Too many connections'.
Node.js or Lighttpd with PHP?
You have probably seen many "Hello World" benchmarks of node.js... but "Hello World" tests, even ones delayed by 2 seconds per request, are not even close to real-world production usage. I also ran those variations of the "Hello World" test with node.js and saw throughput of about 800 req/sec with a 0.01% error rate. However, I decided to run some tests that were a bit more realistic.
Maybe my tests are not complete; most likely something is REALLY wrong with node.js or with my test code, so if you're a node.js expert, please help me write some better tests. My results are published below. I used Apache JMeter for the testing.
Test Case and System Specs
The test is pretty simple: a MySQL query selects the users in random order, and the first user's username is retrieved and displayed. The MySQL connection goes through a unix socket. The OS is FreeBSD 8+, with 8 GB of RAM and an Intel Xeon quad-core 2.x GHz processor. I had tuned the Lighttpd configuration a bit before I even came across node.js.
Apache JMeter Settings
Number of threads (users): 5000 (I believe this is the number of concurrent connections)
Ramp-up period (in seconds): 1
Loop count: 10 (this is the number of requests per user, so each run issues 5,000 threads × 10 loops = 50,000 requests in total)
Apache JMeter End Results
Label                  | # Samples | Average  | Min   | Max      | Std. Dev. | Error % | Throughput | KB/sec | Avg. Bytes
HTTP Requests Lighttpd | 49918     | 2060ms   | 29ms  | 84790ms  | 5524      | 19.47%  | 583.3/sec  | 211.79 | 371.8
HTTP Requests Node.js  | 13767     | 106569ms | 295ms | 292311ms | 91764     | 78.86%  | 44.6/sec   | 79.16  | 1816
Result Conclusions
Node.js was so bad I had to stop the test early. [Fixed: tested completely]
Node.js reports "CONNECTION error: Too many connections" on the server. [Fixed]
Most of the time, Lighttpd had a throughput of about 1200 req/sec.
However, node.js had a throughput of about 29 req/sec. [Fixed: now at 100 req/sec]
This is the code I used for node.js (using MySQL pools):
var cluster = require('cluster'),
    http = require('http'),
    mysql = require('db-mysql'),
    generic_pool = require('generic-pool');

// Pool of at most 10 MySQL connections (per worker process).
var pool = generic_pool.Pool({
    name: 'mysql',
    max: 10,
    create: function(callback) {
        new mysql.Database({
            socket: "/tmp/mysql.sock",
            user: 'root',
            password: 'password',
            database: 'v3edb2011'
        }).connect(function(err, server) {
            callback(err, this);
        });
    },
    destroy: function(db) {
        db.disconnect();
    }
});

var server = http.createServer(function(request, response) {
    response.writeHead(200, {"Content-Type": "text/html"});
    // Borrow a connection from the pool for this request.
    pool.acquire(function(err, db) {
        if (err) {
            return response.end("CONNECTION error: " + err);
        }
        db.query('SELECT * FROM tb_users').execute(function(err, rows, columns) {
            // Return the connection as soon as the query has finished.
            pool.release(db);
            if (err) {
                return response.end("QUERY ERROR: " + err);
            }
            response.write(rows.length + ' ROWS found using node.js<br />');
            response.end(rows[0]["username"]);
        });
    });
});

// Run 5 worker processes, each with its own connection pool.
cluster(server)
    .set('workers', 5)
    .listen(8080);
This is the code I used for PHP (Lighttpd + FastCGI):
<?php
$conn = new mysqli('localhost', 'root', 'password', 'v3edb2011');
if ($conn) {
    $result = $conn->query('SELECT * FROM tb_users ORDER BY RAND()');
    if ($result) {
        echo ($result->num_rows) . ' ROWS found using Lighttpd + PHP (FastCGI)<br />';
        $row = $result->fetch_assoc();
        echo $row['username'];
    } else {
        echo 'Error : DB Query';
    }
} else {
    echo 'Error : DB Connection';
}
?>
This is a bad benchmark comparison. In node.js you're selecting the whole table and putting it into an array. In PHP you're only fetching the first row. So the bigger your table is, the slower node will look. If you made PHP use mysqli_fetch_all() it would be a similar comparison. While db-mysql is supposed to be fast, it's not very full-featured and lacks the ability to make this a fair comparison. Using a different node.js module such as node-mysql-libmysqlclient should allow you to only process the first row.
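To make the first suggestion concrete, here is a sketch of the PHP script changed to buffer the entire result set with mysqli_result::fetch_all(), so both sides do roughly the same amount of work. It reuses the connection details from the question and, matching Edit(1), drops ORDER BY RAND(); note that fetch_all() requires the mysqlnd driver, which is an assumption about this server's PHP build.

<?php
// Sketch: pull every row into an array, mirroring what db-mysql does on the node.js side.
// Assumes the mysqlnd driver is available (required for mysqli_result::fetch_all).
$conn = new mysqli('localhost', 'root', 'password', 'v3edb2011');
if ($conn->connect_errno) {
    die('Error : DB Connection');
}
$result = $conn->query('SELECT * FROM tb_users');
if ($result) {
    $rows = $result->fetch_all(MYSQLI_ASSOC); // all rows, not just the first
    echo count($rows) . ' ROWS found using Lighttpd + PHP (FastCGI)<br />';
    echo $rows[0]['username'];
} else {
    echo 'Error : DB Query';
}
?>

Going the other direction, switching the node.js side to a driver that can read a single row at a time would equalize the comparison the opposite way.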