Preface
Before getting into this article, let's first look at how an HTTP request is generally processed:
The browser sends an HTTP request
-> a socket connection is established
-> data is read from the socket
-> the data is parsed according to the HTTP protocol
-> the backend service is invoked to produce the response
The steps above outline the overall flow. Tomcat is both an HTTP server and a Servlet container, so it necessarily goes through this process: it first parses the request data according to the HTTP protocol specification and then forwards the request to a Servlet for processing. Following that line of thought, this article analyzes two aspects: how the HTTP request is parsed, and how the request is forwarded to the Servlet. Let's start with HTTP request parsing.
Parsing the HTTP Request
After Tomcat starts, by default org.apache.tomcat.util.net.JIoEndpoint.Acceptor listens for incoming socket connections. When a connection is accepted, org.apache.tomcat.util.net.JIoEndpoint#processSocket() is called to handle it. Let's look at the code of this method.
org.apache.tomcat.util.net.JIoEndpoint#processSocket()
/**
* Process a new connection from a new client. Wraps the socket so
* keep-alive and other attributes can be tracked and then passes the socket
* to the executor for processing.
*
* @param socket The socket associated with the client.
*
* @return <code>true</code> if the socket is passed to the
* executor, <code>false</code> if something went wrong or
* if the endpoint is shutting down. Returning
* <code>false</code> is an indication to close the socket
* immediately.
*/
protected boolean processSocket(Socket socket) {
// Process the request from this socket
try {
SocketWrapper<Socket> wrapper = new SocketWrapper<Socket>(socket);
wrapper.setKeepAliveLeft(getMaxKeepAliveRequests());
// During shutdown, executor may be null - avoid NPE
if (!running) {
return false;
}
getExecutor().execute(new SocketProcessor(wrapper));
} catch (RejectedExecutionException x) {
log.warn("Socket processing request was rejected for:"+socket,x);
return false;
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
// This means we got an OOM or similar creating a thread, or that
// the pool and its queue are full
log.error(sm.getString("endpoint.process.fail"), t);
return false;
}
return true;
}
From the code above we can see that the Socket is first wrapped in a SocketWrapper and then handed to a SocketProcessor, which runs on a thread pool because Tomcat has to serve many concurrent requests. Before going into SocketProcessor itself, the short sketch below shows this accept-and-dispatch pattern in isolation; the SocketProcessor code follows after it.
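What processSocket() implements is a classic acceptor/worker split: one thread blocks on accept(), each accepted socket is wrapped, and the wrapped socket is handed to a thread pool for processing. Below is a minimal standalone sketch of that pattern using only java.net and java.util.concurrent; the names (MiniAcceptor, handle) are made up for illustration and this is not Tomcat code.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class MiniAcceptor {

    public static void main(String[] args) throws IOException {
        ExecutorService workers = Executors.newFixedThreadPool(8);
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                // Blocks until a client connects (the Acceptor role)
                Socket socket = server.accept();
                try {
                    // Hand the connection to a worker thread (the SocketProcessor role)
                    workers.execute(() -> handle(socket));
                } catch (RejectedExecutionException e) {
                    // Pool saturated or shutting down: close the socket,
                    // mirroring the "return false" path in processSocket()
                    close(socket);
                }
            }
        }
    }

    private static void handle(Socket socket) {
        try {
            // A real worker would parse the HTTP request here; we just answer 200 OK
            socket.getOutputStream().write(
                    "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok".getBytes(StandardCharsets.ISO_8859_1));
        } catch (IOException e) {
            // ignore and close below
        } finally {
            close(socket);
        }
    }

    private static void close(Socket socket) {
        try { socket.close(); } catch (IOException ignored) { }
    }
}

Tomcat's endpoint adds connection counting, keep-alive tracking and graceful shutdown on top of this basic shape.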
org.apache.tomcat.util.net.JIoEndpoint.SocketProcessor#run()
/**
* This class is the equivalent of the Worker, but will simply use in an
* external Executor thread pool.
*/
protected class SocketProcessor implements Runnable {
protected SocketWrapper<Socket> socket = null;
protected SocketStatus status = null;
public SocketProcessor(SocketWrapper<Socket> socket) {
if (socket==null) throw new NullPointerException();
this.socket = socket;
}
public SocketProcessor(SocketWrapper<Socket> socket, SocketStatus status) {
this(socket);
this.status = status;
}
@Override
public void run() {
boolean launch = false;
synchronized (socket) {
try {
SocketState state = SocketState.OPEN;
try {
// SSL handshake
serverSocketFactory.handshake(socket.getSocket());
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
if (log.isDebugEnabled()) {
log.debug(sm.getString("endpoint.err.handshake"), t);
}
// Tell to close the socket
state = SocketState.CLOSED;
}
if ((state != SocketState.CLOSED)) {
if (status == null) {
// 1
state = handler.process(socket, SocketStatus.OPEN);
} else {
state = handler.process(socket,status);
}
}
if (state == SocketState.CLOSED) {
// Close socket
if (log.isTraceEnabled()) {
log.trace("Closing socket:"+socket);
}
countDownConnection();
try {
socket.getSocket().close();
} catch (IOException e) {
// Ignore
}
} else if (state == SocketState.OPEN ||
state == SocketState.UPGRADING ||
state == SocketState.UPGRADED){
socket.setKeptAlive(true);
socket.access();
launch = true;
} else if (state == SocketState.LONG) {
socket.access();
waitingRequests.add(socket);
}
} finally {
if (launch) {
try {
getExecutor().execute(new SocketProcessor(socket, SocketStatus.OPEN));
} catch (RejectedExecutionException x) {
log.warn("Socket reprocessing request was rejected for:"+socket,x);
try {
//unable to handle connection at this time
handler.process(socket, SocketStatus.DISCONNECT);
} finally {
countDownConnection();
}
} catch (NullPointerException npe) {
if (running) {
log.error(sm.getString("endpoint.launch.fail"),
npe);
}
}
}
}
}
socket = null;
// Finish up this request
}
}
In the default case, execution reaches the line marked 1, which delegates to org.apache.tomcat.util.net.JIoEndpoint.Handler#process(). From the earlier article on Tomcat startup we already know that the handler property is initialized in the constructor of org.apache.coyote.http11.Http11Protocol, shown below.
org.apache.coyote.http11.Http11Protocol()
public Http11Protocol() {
endpoint = new JIoEndpoint();
cHandler = new Http11ConnectionHandler(this);
((JIoEndpoint) endpoint).setHandler(cHandler);
setSoLinger(Constants.DEFAULT_CONNECTION_LINGER);
setSoTimeout(Constants.DEFAULT_CONNECTION_TIMEOUT);
setTcpNoDelay(Constants.DEFAULT_TCP_NO_DELAY);
}
The constructor clearly shows that an instance of org.apache.coyote.http11.Http11Protocol.Http11ConnectionHandler is created and registered as the handler, so let's look at its process() method. Http11ConnectionHandler extends org.apache.coyote.AbstractProtocol.AbstractConnectionHandler and does not override process() itself, so the parent class's process() is what runs. Its declaration is shown below, followed by AbstractConnectionHandler#process().
protected static class Http11ConnectionHandler extends AbstractConnectionHandler<Socket, Http11Processor> implements Handler
org.apache.coyote.AbstractProtocol.AbstractConnectionHandler#process()
public SocketState process(SocketWrapper<S> socket, SocketStatus status) {
Processor<S> processor = connections.remove(socket.getSocket());
if (status == SocketStatus.DISCONNECT && processor == null) {
//nothing more to be done endpoint requested a close
//and there are no object associated with this connection
return SocketState.CLOSED;
}
socket.setAsync(false);
try {
if (processor == null) {
processor = recycledProcessors.poll();
}
if (processor == null) {
processor = createProcessor(); // subclasses override createProcessor()
}
initSsl(socket, processor);
SocketState state = SocketState.CLOSED;
do {
if (status == SocketStatus.DISCONNECT &&
!processor.isComet()) {
// Do nothing here, just wait for it to get recycled
// Don't do this for Comet we need to generate an end
// event (see BZ 54022)
} else if (processor.isAsync() ||
state == SocketState.ASYNC_END) {
state = processor.asyncDispatch(status);
} else if (processor.isComet()) {
state = processor.event(status);
} else if (processor.isUpgrade()) {
state = processor.upgradeDispatch();
} else {
state = processor.process(socket);
}
if (state != SocketState.CLOSED && processor.isAsync()) {
state = processor.asyncPostProcess();
}
if (state == SocketState.UPGRADING) {
// Get the UpgradeInbound handler
UpgradeInbound inbound = processor.getUpgradeInbound();
// Release the Http11 processor to be re-used
release(socket, processor, false, false);
// Create the light-weight upgrade processor
processor = createUpgradeProcessor(socket, inbound);
inbound.onUpgradeComplete();
}
} while (state == SocketState.ASYNC_END ||
state == SocketState.UPGRADING);
if (state == SocketState.LONG) {
// In the middle of processing a request/response. Keep the
// socket associated with the processor. Exact requirements
// depend on type of long poll
longPoll(socket, processor);
} else if (state == SocketState.OPEN) {
// In keep-alive but between requests. OK to recycle
// processor. Continue to poll for the next request.
release(socket, processor, false, true);
} else if (state == SocketState.SENDFILE) {
// Sendfile in progress. If it fails, the socket will be
// closed. If it works, the socket will be re-added to the
// poller
release(socket, processor, false, false);
} else if (state == SocketState.UPGRADED) {
// Need to keep the connection associated with the processor
longPoll(socket, processor);
} else {
// Connection closed. OK to recycle the processor.
if (!(processor instanceof UpgradeProcessor)) {
release(socket, processor, true, false);
}
}
return state;
} catch(java.net.SocketException e) {
// SocketExceptions are normal
getLog().debug(sm.getString(
"abstractConnectionHandler.socketexception.debug"), e);
} catch (java.io.IOException e) {
// IOExceptions are normal
getLog().debug(sm.getString(
"abstractConnectionHandler.ioexception.debug"), e);
}
// Future developers: if you discover any other
// rare-but-nonfatal exceptions, catch them here, and log as
// above.
catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
// any other exception or error is odd. Here we log it
// with "ERROR" level, so it will show up even on
// less-than-verbose logs.
getLog().error(
sm.getString("abstractConnectionHandler.error"), e);
}
// Don't try to add upgrade processors back into the pool
if (!(processor instanceof UpgradeProcessor)) {
release(socket, processor, true, false);
}
return SocketState.CLOSED;
}
Looking at the code above, for a new connection the default branch ends up calling org.apache.coyote.Processor#process(). The Processor instance is created in org.apache.coyote.AbstractProtocol.AbstractConnectionHandler#createProcessor(), which Http11ConnectionHandler overrides.
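The lookup order above (connections.remove(), then recycledProcessors.poll(), then createProcessor()) is essentially an object pool: finished processors are recycled and reused for later connections instead of being allocated per request. Before looking at createProcessor() itself, here is a simplified sketch of that reuse idea; MiniProcessor and MiniProcessorPool are hypothetical names, not Tomcat's RecycledProcessors class.

import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical processor type, used only for this illustration
class MiniProcessor {
    void reset() { /* clear per-request state before the processor is reused */ }
}

class MiniProcessorPool {

    private final ConcurrentLinkedQueue<MiniProcessor> recycled = new ConcurrentLinkedQueue<>();

    MiniProcessor acquire() {
        // Reuse a recycled processor if one is available, otherwise create a new one
        MiniProcessor p = recycled.poll();
        return (p != null) ? p : new MiniProcessor();
    }

    void release(MiniProcessor p) {
        // Clear state and make the processor available for the next connection
        p.reset();
        recycled.offer(p);
    }
}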
org.apache.coyote.http11.Http11Protocol.Http11ConnectionHandler#createProcessor()
@Override
protected Http11Processor createProcessor() {
Http11Processor processor = new Http11Processor(proto.getMaxHttpHeaderSize(), (JIoEndpoint)proto.endpoint,proto.getMaxTrailerSize());
processor.setAdapter(proto.adapter);
processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests());
processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
processor.setConnectionUploadTimeout(
proto.getConnectionUploadTimeout());
processor.setDisableUploadTimeout(proto.getDisableUploadTimeout());
processor.setCompressionMinSize(proto.getCompressionMinSize());
processor.setCompression(proto.getCompression());
processor.setNoCompressionUserAgents(proto.getNoCompressionUserAgents());
processor.setCompressableMimeTypes(proto.getCompressableMimeTypes());
processor.setRestrictedUserAgents(proto.getRestrictedUserAgents());
processor.setSocketBuffer(proto.getSocketBuffer());
processor.setMaxSavePostSize(proto.getMaxSavePostSize());
processor.setServer(proto.getServer());
processor.setDisableKeepAlivePercentage(
proto.getDisableKeepAlivePercentage());
register(processor);
return processor;
}
createProcessor() creates an instance of org.apache.coyote.http11.Http11Processor, so next let's look at its process() method. Because Http11Processor extends AbstractHttp11Processor, what actually runs is AbstractHttp11Processor#process(), shown below.
org.apache.coyote.http11.AbstractHttp11Processor#process()
/**
* Process pipelined HTTP requests using the specified input and output
* streams.
*
* @param socketWrapper Socket from which the HTTP requests will be read
* and the HTTP responses will be written.
*
* @throws IOException error during an I/O operation
*/
@Override
public SocketState process(SocketWrapper<S> socketWrapper)
throws IOException {
RequestInfo rp = request.getRequestProcessor();
rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);
// Setting up the I/O
// 1
setSocketWrapper(socketWrapper);
getInputBuffer().init(socketWrapper, endpoint);
getOutputBuffer().init(socketWrapper, endpoint);
// Flags
error = false;
keepAlive = true;
comet = false;
openSocket = false;
sendfileInProgress = false;
readComplete = true;
if (endpoint.getUsePolling()) {
keptAlive = false;
} else {
keptAlive = socketWrapper.isKeptAlive();
}
if (disableKeepAlive()) {
socketWrapper.setKeepAliveLeft(0);
}
while (!error && keepAlive && !comet && !isAsync() &&
upgradeInbound == null && !endpoint.isPaused()) {
// Parsing the request header
try {
setRequestLineReadTimeout();
// 2
if (!getInputBuffer().parseRequestLine(keptAlive)) {
if (handleIncompleteRequestLineRead()) {
break;
}
}
if (endpoint.isPaused()) {
// 503 - Service unavailable
response.setStatus(503);
error = true;
} else {
// Make sure that connectors that are non-blocking during
// header processing (NIO) only set the start time the first
// time a request is processed.
if (request.getStartTime() < 0) {
request.setStartTime(System.currentTimeMillis());
}
keptAlive = true;
// Set this every time in case limit has been changed via JMX
request.getMimeHeaders().setLimit(endpoint.getMaxHeaderCount());
// Currently only NIO will ever return false here
// 3
if (!getInputBuffer().parseHeaders()) {
// We've read part of the request, don't recycle it
// instead associate it with the socket
openSocket = true;
readComplete = false;
break;
}
if (!disableUploadTimeout) {
setSocketTimeout(connectionUploadTimeout);
}
}
} catch (IOException e) {
if (getLog().isDebugEnabled()) {
getLog().debug(
sm.getString("http11processor.header.parse"), e);
}
error = true;
break;
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
UserDataHelper.Mode logMode = userDataHelper.getNextMode();
if (logMode != null) {
String message = sm.getString(
"http11processor.header.parse");
switch (logMode) {
case INFO_THEN_DEBUG:
message += sm.getString(
"http11processor.fallToDebug");
//$FALL-THROUGH$
case INFO:
getLog().info(message);
break;
case DEBUG:
getLog().debug(message);
}
}
// 400 - Bad Request
response.setStatus(400);
adapter.log(request, response, 0);
error = true;
}
if (!error) {
// Setting up filters, and parse some request headers
rp.setStage(org.apache.coyote.Constants.STAGE_PREPARE);
try {
prepareRequest();
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
if (getLog().isDebugEnabled()) {
getLog().debug(sm.getString(
"http11processor.request.prepare"), t);
}
// 400 - Internal Server Error
response.setStatus(400);
adapter.log(request, response, 0);
error = true;
}
}
if (maxKeepAliveRequests == 1) {
keepAlive = false;
} else if (maxKeepAliveRequests > 0 &&
socketWrapper.decrementKeepAlive() <= 0) {
keepAlive = false;
}
// Process the request in the adapter
if (!error) {
try {
// 4
rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
adapter.service(request, response);
// Handle when the response was committed before a serious
// error occurred. Throwing a ServletException should both
// set the status to 500 and set the errorException.
// If we fail here, then the response is likely already
// committed, so we can't try and set headers.
if(keepAlive && !error) { // Avoid checking twice.
error = response.getErrorException() != null ||
(!isAsync() &&
statusDropsConnection(response.getStatus()));
}
setCometTimeouts(socketWrapper);
} catch (InterruptedIOException e) {
error = true;
} catch (HeadersTooLargeException e) {
error = true;
// The response should not have been committed but check it
// anyway to be safe
if (!response.isCommitted()) {
response.reset();
response.setStatus(500);
response.setHeader("Connection", "close");
}
} catch (Throwable t) {
ExceptionUtils.handleThrowable(t);
getLog().error(sm.getString(
"http11processor.request.process"), t);
// 500 - Internal Server Error
response.setStatus(500);
adapter.log(request, response, 0);
error = true;
}
}
// Finish the handling of the request
rp.setStage(org.apache.coyote.Constants.STAGE_ENDINPUT);
if (!isAsync() && !comet) {
if (error) {
// If we know we are closing the connection, don't drain
// input. This way uploading a 100GB file doesn't tie up the
// thread if the servlet has rejected it.
getInputBuffer().setSwallowInput(false);
}
endRequest();
}
rp.setStage(org.apache.coyote.Constants.STAGE_ENDOUTPUT);
// If there was an error, make sure the request is counted as
// and error, and update the statistics counter
if (error) {
response.setStatus(500);
}
request.updateCounters();
if (!isAsync() && !comet || error) {
getInputBuffer().nextRequest();
getOutputBuffer().nextRequest();
}
if (!disableUploadTimeout) {
if(endpoint.getSoTimeout() > 0) {
setSocketTimeout(endpoint.getSoTimeout());
} else {
setSocketTimeout(0);
}
}
rp.setStage(org.apache.coyote.Constants.STAGE_KEEPALIVE);
if (breakKeepAliveLoop(socketWrapper)) {
break;
}
}
rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);
if (error || endpoint.isPaused()) {
return SocketState.CLOSED;
} else if (isAsync() || comet) {
return SocketState.LONG;
} else if (isUpgrade()) {
return SocketState.UPGRADING;
} else {
if (sendfileInProgress) {
return SocketState.SENDFILE;
} else {
if (openSocket) {
if (readComplete) {
return SocketState.OPEN;
} else {
return SocketState.LONG;
}
} else {
return SocketState.CLOSED;
}
}
}
}
- Marker 1: wraps the socket's input and output streams through the InternalInputBuffer, which is initialized in the Http11Processor constructor.
- Marker 2: calls InternalInputBuffer#parseRequestLine() to parse the request line of the HTTP request.
- Marker 3: calls InternalInputBuffer#parseHeaders() to parse the HTTP request headers; once parsed, the headers are stored in org.apache.tomcat.util.http.MimeHeaders.
- Marker 4: calls org.apache.coyote.Adapter#service(), which eventually reaches the target Servlet.
For the HTTP request line and request headers, consider the following example:
Http get request
GET /contextpath/querystring HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:23.0) Gecko/20100101 Firefox/23.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: JSESSIONID=9F5897FEF3CDBCB234C050C132DCAE52; __atuvc=384%7C39; __utma=96992031.358732763.1380383869.1381468490.1381554710.38; __utmz=96992031.1380383869.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); Hm_lvt_21e144d0df165d6556d664e2836dadfe=1381330561,1381368826,1381395666,1381554711
Connection: keep-alive
Cache-Control: max-age=0
In the HTTP GET request above, the request line is the first line, GET /contextpath/querystring HTTP/1.1, and everything after it is request headers. Note that according to the HTTP protocol the request line must end with CRLF, each header line is likewise terminated by CRLF, and an empty line containing only CRLF marks the end of the headers. See the HTTP/1.1 specification for the details of the header format.
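To make the format concrete, here is a minimal sketch that reads the request line and the headers of such a message, splitting at line boundaries and stopping at the empty line. It only illustrates the wire format and is far simpler than Tomcat's InternalInputBuffer.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.LinkedHashMap;
import java.util.Map;

public class MiniHttpParser {

    public static void main(String[] args) throws IOException {
        String raw = "GET /contextpath/querystring HTTP/1.1\r\n"
                   + "Host: 127.0.0.1:8080\r\n"
                   + "Connection: keep-alive\r\n"
                   + "\r\n";                      // the empty line ends the headers
        BufferedReader in = new BufferedReader(new StringReader(raw));

        // Request line: METHOD SP URI SP PROTOCOL
        String[] requestLine = in.readLine().split(" ");
        String method = requestLine[0], uri = requestLine[1], protocol = requestLine[2];

        // Header lines: "Name: value", one per line, until the empty line
        Map<String, String> headers = new LinkedHashMap<>();
        String line;
        while ((line = in.readLine()) != null && !line.isEmpty()) {
            int colon = line.indexOf(':');
            headers.put(line.substring(0, colon).trim(), line.substring(colon + 1).trim());
        }

        System.out.println(method + " " + uri + " " + protocol);
        System.out.println(headers);
    }
}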
From the description above, the request-parsing flow can be summarized as follows.
Request http header parse
org.apache.tomcat.util.net.JIoEndpoint.Acceptor#run
->org.apache.tomcat.util.net.JIoEndpoint.SocketProcessor#run (runs in the request-processing thread pool)
-->org.apache.coyote.AbstractProtocol.AbstractConnectionHandler#process (createProcessor)
--->org.apache.coyote.http11.AbstractHttp11Processor#process
---->org.apache.coyote.http11.InternalInputBuffer#parseRequestLine
---->org.apache.coyote.http11.InternalInputBuffer#parseHeaders
---->org.apache.catalina.connector.CoyoteAdapter#service (invokes the target servlet)
How the Request Is Forwarded to the Servlet
We have now seen how the socket data of an incoming request is parsed according to the HTTP protocol, ultimately producing an org.apache.coyote.Request and an org.apache.coyote.Response. Next, let's see how this request and response travel, step by step, to the target Servlet. The entry point of this stage is CoyoteAdapter#service(), so let's look at its code.
org.apache.catalina.connector.CoyoteAdapter#service()
/**
* Service method.
*/
@Override
public void service(org.apache.coyote.Request req,
org.apache.coyote.Response res)
throws Exception {
Request request = (Request) req.getNote(ADAPTER_NOTES);
Response response = (Response) res.getNote(ADAPTER_NOTES);
// 1
if (request == null) {
// Create objects
request = connector.createRequest();
request.setCoyoteRequest(req);
response = connector.createResponse();
response.setCoyoteResponse(res);
// Link objects
request.setResponse(response);
response.setRequest(request);
// Set as notes
req.setNote(ADAPTER_NOTES, request);
res.setNote(ADAPTER_NOTES, response);
// Set query string encoding
req.getParameters().setQueryStringEncoding
(connector.getURIEncoding());
}
if (connector.getXpoweredBy()) {
response.addHeader("X-Powered-By", POWERED_BY);
}
boolean comet = false;
boolean async = false;
try {
// Parse and set Catalina and configuration specific
// request parameters
req.getRequestProcessor().setWorkerThreadName(Thread.currentThread().getName());
// 2
boolean postParseSuccess = postParseRequest(req, request, res, response);
if (postParseSuccess) {
//check valves if we support async
request.setAsyncSupported(connector.getService().getContainer().getPipeline().isAsyncSupported());
// Calling the container
// 3
connector.getService().getContainer().getPipeline().getFirst().invoke(request, response); // finally passes the request on towards the servlet
if (request.isComet()) {
if (!response.isClosed() && !response.isError()) {
if (request.getAvailable() || (request.getContentLength() > 0 && (!request.isParametersParsed()))) {
// Invoke a read event right away if there are available bytes
if (event(req, res, SocketStatus.OPEN)) {
comet = true;
res.action(ActionCode.COMET_BEGIN, null);
}
} else {
comet = true;
res.action(ActionCode.COMET_BEGIN, null);
}
} else {
// Clear the filter chain, as otherwise it will not be reset elsewhere
// since this is a Comet request
request.setFilterChain(null);
}
}
}
AsyncContextImpl asyncConImpl = (AsyncContextImpl)request.getAsyncContext();
if (asyncConImpl != null) {
async = true;
} else if (!comet) {
request.finishRequest();
response.finishResponse();
if (postParseSuccess &&
request.getMappingData().context != null) {
// Log only if processing was invoked.
// If postParseRequest() failed, it has already logged it.
// If context is null this was the start of a comet request
// that failed and has already been logged.
((Context) request.getMappingData().context).logAccess(
request, response,
System.currentTimeMillis() - req.getStartTime(),
false);
}
req.action(ActionCode.POST_REQUEST , null);
}
} catch (IOException e) {
// Ignore
} finally {
req.getRequestProcessor().setWorkerThreadName(null);
// Recycle the wrapper request and response
if (!comet && !async) {
request.recycle();
response.recycle();
} else {
// Clear converters so that the minimum amount of memory
// is used by this processor
request.clearEncoders();
response.clearEncoders();
}
}
}
- Marker 1: converts the org.apache.coyote.Request and org.apache.coyote.Response objects into org.apache.catalina.connector.Request and org.apache.catalina.connector.Response objects. The coyote Request only carries the data parsed from the HTTP protocol, whereas the connector Request is the real HttpServletRequest of the Servlet container; it holds the host, context and wrapper information needed to complete the request, and each wrapper corresponds to one Servlet configured in web.xml.
- Marker 2: calls postParseRequest(). This method does a lot of work, but ultimately it is all about finding the Host, Context and Wrapper objects that match the Request, in other words working out which Servlet should handle this request.
- Marker 3: passes the Request, with its Host, Context and Wrapper now set, down the Pipeline chain to the target Servlet.
The above only describes what org.apache.catalina.connector.CoyoteAdapter#service does at a high level; let's now break each step down further. Step 1 is straightforward and can be read on its own; the interesting parts are steps 2 and 3. First, postParseRequest().
org.apache.catalina.connector.CoyoteAdapter#postParseRequest()
/**
* Parse additional request parameters.
*/
protected boolean postParseRequest(org.apache.coyote.Request req,
Request request,
org.apache.coyote.Response res,
Response response)
throws Exception {
// XXX the processor may have set a correct scheme and port prior to this point,
// in ajp13 protocols dont make sense to get the port from the connector...
// otherwise, use connector configuration
if (! req.scheme().isNull()) {
// use processor specified scheme to determine secure state
request.setSecure(req.scheme().equals("https"));
} else {
// use connector scheme and secure configuration, (defaults to
// "http" and false respectively)
req.scheme().setString(connector.getScheme());
request.setSecure(connector.getSecure());
}
// FIXME: the code below doesnt belongs to here,
// this is only have sense
// in Http11, not in ajp13..
// At this point the Host header has been processed.
// Override if the proxyPort/proxyHost are set
String proxyName = connector.getProxyName();
int proxyPort = connector.getProxyPort();
if (proxyPort != 0) {
req.setServerPort(proxyPort);
}
if (proxyName != null) {
req.serverName().setString(proxyName);
}
// Copy the raw URI to the decodedURI
MessageBytes decodedURI = req.decodedURI();
decodedURI.duplicate(req.requestURI());
// Parse the path parameters. This will:
// - strip out the path parameters
// - convert the decodedURI to bytes
parsePathParameters(req, request);
// URI decoding
// %xx decoding of the URL
try {
req.getURLDecoder().convert(decodedURI, false);
} catch (IOException ioe) {
res.setStatus(400);
res.setMessage("Invalid URI: " + ioe.getMessage());
connector.getService().getContainer().logAccess(
request, response, 0, true);
return false;
}
// Normalization
if (!normalize(req.decodedURI())) {
res.setStatus(400);
res.setMessage("Invalid URI");
connector.getService().getContainer().logAccess(
request, response, 0, true);
return false;
}
// Character decoding
convertURI(decodedURI, request);
// Check that the URI is still normalized
if (!checkNormalize(req.decodedURI())) {
res.setStatus(400);
res.setMessage("Invalid URI character encoding");
connector.getService().getContainer().logAccess(
request, response, 0, true);
return false;
}
// Set the remote principal
String principal = req.getRemoteUser().toString();
if (principal != null) {
request.setUserPrincipal(new CoyotePrincipal(principal));
}
// Set the authorization type
String authtype = req.getAuthType().toString();
if (authtype != null) {
request.setAuthType(authtype);
}
// Request mapping.
MessageBytes serverName;
if (connector.getUseIPVHosts()) {
serverName = req.localName();
if (serverName.isNull()) {
// well, they did ask for it
res.action(ActionCode.REQ_LOCAL_NAME_ATTRIBUTE, null);
}
} else {
serverName = req.serverName();
}
if (request.isAsyncStarted()) {
//TODO SERVLET3 - async
//reset mapping data, should prolly be done elsewhere
request.getMappingData().recycle();
}
boolean mapRequired = true;
String version = null;
while (mapRequired) {
if (version != null) {
// Once we have a version - that is it
mapRequired = false;
}
// This will map the the latest version by default
connector.getMapper().map(serverName, decodedURI, version,
request.getMappingData());
request.setContext((Context) request.getMappingData().context);
request.setWrapper((Wrapper) request.getMappingData().wrapper);
// Single contextVersion therefore no possibility of remap
if (request.getMappingData().contexts == null) {
mapRequired = false;
}
// If there is no context at this point, it is likely no ROOT context
// has been deployed
if (request.getContext() == null) {
res.setStatus(404);
res.setMessage("Not found");
// No context, so use host
Host host = request.getHost();
// Make sure there is a host (might not be during shutdown)
if (host != null) {
host.logAccess(request, response, 0, true);
}
return false;
}
// Now we have the context, we can parse the session ID from the URL
// (if any). Need to do this before we redirect in case we need to
// include the session id in the redirect
String sessionID = null;
if (request.getServletContext().getEffectiveSessionTrackingModes()
.contains(SessionTrackingMode.URL)) {
// Get the session ID if there was one
sessionID = request.getPathParameter(
SessionConfig.getSessionUriParamName(
request.getContext()));
if (sessionID != null) {
request.setRequestedSessionId(sessionID);
request.setRequestedSessionURL(true);
}
}
// Look for session ID in cookies and SSL session
parseSessionCookiesId(req, request);
parseSessionSslId(request);
sessionID = request.getRequestedSessionId();
if (mapRequired) {
if (sessionID == null) {
// No session means no possibility of needing to remap
mapRequired = false;
} else {
// Find the context associated with the session
Object[] objs = request.getMappingData().contexts;
for (int i = (objs.length); i > 0; i--) {
Context ctxt = (Context) objs[i - 1];
if (ctxt.getManager().findSession(sessionID) != null) {
// Was the correct context already mapped?
if (ctxt.equals(request.getMappingData().context)) {
mapRequired = false;
} else {
// Set version so second time through mapping the
// correct context is found
version = ctxt.getWebappVersion();
// Reset mapping
request.getMappingData().recycle();
break;
}
}
}
if (version == null) {
// No matching context found. No need to re-map
mapRequired = false;
}
}
}
if (!mapRequired && request.getContext().getPaused()) {
// Found a matching context but it is paused. Mapping data will
// be wrong since some Wrappers may not be registered at this
// point.
try {
Thread.sleep(1000);
} catch (InterruptedException e) {
// Should never happen
}
// Reset mapping
request.getMappingData().recycle();
mapRequired = true;
}
}
// Possible redirect
MessageBytes redirectPathMB = request.getMappingData().redirectPath;
if (!redirectPathMB.isNull()) {
String redirectPath = urlEncoder.encode(redirectPathMB.toString());
String query = request.getQueryString();
if (request.isRequestedSessionIdFromURL()) {
// This is not optimal, but as this is not very common, it
// shouldn't matter
redirectPath = redirectPath + ";" +
SessionConfig.getSessionUriParamName(
request.getContext()) +
"=" + request.getRequestedSessionId();
}
if (query != null) {
// This is not optimal, but as this is not very common, it
// shouldn't matter
redirectPath = redirectPath + "?" + query;
}
response.sendRedirect(redirectPath);
request.getContext().logAccess(request, response, 0, true);
return false;
}
// Filter trace method
if (!connector.getAllowTrace()
&& req.method().equalsIgnoreCase("TRACE")) {
Wrapper wrapper = request.getWrapper();
String header = null;
if (wrapper != null) {
String[] methods = wrapper.getServletMethods();
if (methods != null) {
for (int i=0; i<methods.length; i++) {
if ("TRACE".equals(methods[i])) {
continue;
}
if (header == null) {
header = methods[i];
} else {
header += ", " + methods[i];
}
}
}
}
res.setStatus(405);
res.addHeader("Allow", header);
res.setMessage("TRACE method is not allowed");
request.getContext().logAccess(request, response, 0, true);
return false;
}
return true;
}
Analyzing org.apache.catalina.connector.CoyoteAdapter#postParseRequest(), we find that it ultimately relies on org.apache.tomcat.util.http.mapper.Mapper#map() to match the request to its Context and Wrapper (the wrapper class around a Servlet).
// This will map the the latest version by default
connector.getMapper().map(serverName, decodedURI, version, request.getMappingData());
request.setContext((Context) request.getMappingData().context);
request.setWrapper((Wrapper) request.getMappingData().wrapper);
org.apache.tomcat.util.http.mapper.Mapper#map()
/**
* Map the specified host name and URI, mutating the given mapping data.
*
* @param host Virtual host name
* @param uri URI
* @param mappingData This structure will contain the result of the mapping
* operation
*/
public void map(MessageBytes host, MessageBytes uri, String version,
MappingData mappingData)
throws Exception {
if (host.isNull()) {
host.getCharChunk().append(defaultHostName);
}
host.toChars();
uri.toChars();
internalMap(host.getCharChunk(), uri.getCharChunk(), version, mappingData);
}
Looking at its code, map() ultimately delegates to internalMap(), which stores the Context and Wrapper it finds into the org.apache.tomcat.util.http.mapper.MappingData attribute of the org.apache.catalina.connector.Request. After map() returns, the Context and Wrapper found in the MappingData are then set onto the Request's context and wrapper fields.
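Conceptually, the Mapper keeps the deployed context paths and servlet url-patterns in lookup tables and matches the decoded URI against them, preferring the longest match. The following is a heavily simplified, hypothetical sketch of that idea using plain prefix matching; Tomcat's real rules (exact, prefix, extension and default mappings, plus context versions) are considerably richer.

import java.util.Map;
import java.util.TreeMap;

public class MiniMapper {

    // context path -> (url prefix -> servlet name); hypothetical data structure for illustration
    private final TreeMap<String, TreeMap<String, String>> contexts = new TreeMap<>();

    void addServlet(String contextPath, String urlPrefix, String servletName) {
        contexts.computeIfAbsent(contextPath, k -> new TreeMap<>()).put(urlPrefix, servletName);
    }

    /** Returns "contextPath -> servletName" for a decoded URI, or null if nothing matches. */
    String map(String uri) {
        // Longest matching context path wins
        for (String ctx : contexts.descendingKeySet()) {
            if (uri.equals(ctx) || uri.startsWith(ctx + "/")) {
                String remainder = uri.substring(ctx.length());
                // Longest matching servlet prefix inside that context wins
                for (Map.Entry<String, String> e : contexts.get(ctx).descendingMap().entrySet()) {
                    if (remainder.startsWith(e.getKey())) {
                        return ctx + " -> " + e.getValue();
                    }
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        MiniMapper mapper = new MiniMapper();
        mapper.addServlet("/contextpath", "/query", "QueryServlet");
        mapper.addServlet("", "/", "DefaultServlet");
        System.out.println(mapper.map("/contextpath/querystring")); // /contextpath -> QueryServlet
        System.out.println(mapper.map("/other"));                   //  -> DefaultServlet
    }
}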
Step 3 reaches the Servlet through the pipeline's chained invocation. A pipeline is an application of the chain-of-responsibility pattern: it links the valves together and invokes them one after another. The Valve objects in a pipeline come from two places: valves configured in conf/server.xml (every container supports the pipeline mechanism), and the valve that each container initializes itself in its constructor.
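A pipeline of valves is a textbook chain of responsibility: each valve does its own work and then calls the next one, and the last ("basic") valve of a container's pipeline hands the request on towards the next container, until StandardWrapperValve finally reaches the servlet. The following minimal sketch shows only that linkage; the Valve interface here is a hypothetical stand-in, not Catalina's Valve/Pipeline API.

public class MiniPipelineDemo {

    // Hypothetical valve interface, just enough to show the chaining
    interface Valve {
        void setNext(Valve next);
        void invoke(String request);
    }

    static abstract class ValveBase implements Valve {
        protected Valve next;
        public void setNext(Valve next) { this.next = next; }
    }

    static class LoggingValve extends ValveBase {
        public void invoke(String request) {
            System.out.println("access log: " + request); // this valve's own work
            next.invoke(request);                          // then pass the request on
        }
    }

    static class BasicValve extends ValveBase {
        public void invoke(String request) {
            // Last valve of the pipeline: in Tomcat this is where the next container's
            // pipeline (and eventually the servlet) would be invoked
            System.out.println("handling " + request);
        }
    }

    public static void main(String[] args) {
        Valve first = new LoggingValve();
        first.setNext(new BasicValve());
        first.invoke("GET /contextpath/querystring");
    }
}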
StandardEngine has a corresponding StandardEngineValve, StandardHost has a StandardHostValve, StandardContext has a StandardContextValve, and StandardWrapper has a StandardWrapperValve. Following the code, the invocation chain looks like this.
->org.apache.catalina.core.StandardEngineValve#invoke
-->org.apache.catalina.valves.AccessLogValve#invoke
--->org.apache.catalina.valves.ErrorReportValve#invoke
---->org.apache.catalina.core.StandardHostValve#invoke
----->org.apache.catalina.authenticator.AuthenticatorBase#invoke
------>org.apache.catalina.core.StandardContextValve#invoke
------->org.apache.catalina.core.StandardWrapperValve#invoke
In the call stack above, the last valve reached is StandardWrapperValve, which is where the Servlet is ultimately invoked. Let's look at its code.
org.apache.catalina.core.StandardWrapperValve#invoke()
/**
* Invoke the servlet we are managing, respecting the rules regarding
* servlet lifecycle and SingleThreadModel support.
*
* @param request Request to be processed
* @param response Response to be produced
*
* @exception IOException if an input/output error occurred
* @exception ServletException if a servlet error occurred
*/
@Override
public final void invoke(Request request, Response response)
throws IOException, ServletException {
// Initialize local variables we may need
boolean unavailable = false;
Throwable throwable = null;
// This should be a Request attribute...
long t1=System.currentTimeMillis();
requestCount++;
StandardWrapper wrapper = (StandardWrapper) getContainer();
Servlet servlet = null;
Context context = (Context) wrapper.getParent();
// Check for the application being marked unavailable
if (!context.getState().isAvailable()) {
response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
sm.getString("standardContext.isUnavailable"));
unavailable = true;
}
// Check for the servlet being marked unavailable
if (!unavailable && wrapper.isUnavailable()) {
container.getLogger().info(sm.getString("standardWrapper.isUnavailable",
wrapper.getName()));
long available = wrapper.getAvailable();
if ((available > 0L) && (available < Long.MAX_VALUE)) {
response.setDateHeader("Retry-After", available);
response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
sm.getString("standardWrapper.isUnavailable",
wrapper.getName()));
} else if (available == Long.MAX_VALUE) {
response.sendError(HttpServletResponse.SC_NOT_FOUND,
sm.getString("standardWrapper.notFound",
wrapper.getName()));
}
unavailable = true;
}
// Allocate a servlet instance to process this request
try {
// 1
if (!unavailable) {
servlet = wrapper.allocate();
}
} catch (UnavailableException e) {
container.getLogger().error(
sm.getString("standardWrapper.allocateException",
wrapper.getName()), e);
long available = wrapper.getAvailable();
if ((available > 0L) && (available < Long.MAX_VALUE)) {
response.setDateHeader("Retry-After", available);
response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
sm.getString("standardWrapper.isUnavailable",
wrapper.getName()));
} else if (available == Long.MAX_VALUE) {
response.sendError(HttpServletResponse.SC_NOT_FOUND,
sm.getString("standardWrapper.notFound",
wrapper.getName()));
}
} catch (ServletException e) {
container.getLogger().error(sm.getString("standardWrapper.allocateException",
wrapper.getName()), StandardWrapper.getRootCause(e));
throwable = e;
exception(request, response, e);
} catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
container.getLogger().error(sm.getString("standardWrapper.allocateException",
wrapper.getName()), e);
throwable = e;
exception(request, response, e);
servlet = null;
}
// Identify if the request is Comet related now that the servlet has been allocated
boolean comet = false;
if (servlet instanceof CometProcessor && request.getAttribute(
Globals.COMET_SUPPORTED_ATTR) == Boolean.TRUE) {
comet = true;
request.setComet(true);
}
MessageBytes requestPathMB = request.getRequestPathMB();
DispatcherType dispatcherType = DispatcherType.REQUEST;
if (request.getDispatcherType()==DispatcherType.ASYNC) dispatcherType = DispatcherType.ASYNC;
request.setAttribute(Globals.DISPATCHER_TYPE_ATTR,dispatcherType);
request.setAttribute(Globals.DISPATCHER_REQUEST_PATH_ATTR,
requestPathMB);
// Create the filter chain for this request
ApplicationFilterFactory factory =
ApplicationFilterFactory.getInstance();
ApplicationFilterChain filterChain =
factory.createFilterChain(request, wrapper, servlet);
// Reset comet flag value after creating the filter chain
request.setComet(false);
// Call the filter chain for this request
// NOTE: This also calls the servlet's service() method
// 2
try {
if ((servlet != null) && (filterChain != null)) {
// Swallow output if needed
if (context.getSwallowOutput()) {
try {
SystemLogHandler.startCapture();
if (request.isAsyncDispatching()) {
//TODO SERVLET3 - async
((AsyncContextImpl)request.getAsyncContext()).doInternalDispatch();
} else if (comet) {
filterChain.doFilterEvent(request.getEvent());
request.setComet(true);
} else {
filterChain.doFilter(request.getRequest(),
response.getResponse());
}
} finally {
String log = SystemLogHandler.stopCapture();
if (log != null && log.length() > 0) {
context.getLogger().info(log);
}
}
} else {
if (request.isAsyncDispatching()) {
//TODO SERVLET3 - async
((AsyncContextImpl)request.getAsyncContext()).doInternalDispatch();
} else if (comet) {
request.setComet(true);
filterChain.doFilterEvent(request.getEvent());
} else {
filterChain.doFilter
(request.getRequest(), response.getResponse());
}
}
}
} catch (ClientAbortException e) {
throwable = e;
exception(request, response, e);
} catch (IOException e) {
container.getLogger().error(sm.getString(
"standardWrapper.serviceException", wrapper.getName(),
context.getName()), e);
throwable = e;
exception(request, response, e);
} catch (UnavailableException e) {
container.getLogger().error(sm.getString(
"standardWrapper.serviceException", wrapper.getName(),
context.getName()), e);
// throwable = e;
// exception(request, response, e);
wrapper.unavailable(e);
long available = wrapper.getAvailable();
if ((available > 0L) && (available < Long.MAX_VALUE)) {
response.setDateHeader("Retry-After", available);
response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE,
sm.getString("standardWrapper.isUnavailable",
wrapper.getName()));
} else if (available == Long.MAX_VALUE) {
response.sendError(HttpServletResponse.SC_NOT_FOUND,
sm.getString("standardWrapper.notFound",
wrapper.getName()));
}
// Do not save exception in 'throwable', because we
// do not want to do exception(request, response, e) processing
} catch (ServletException e) {
Throwable rootCause = StandardWrapper.getRootCause(e);
if (!(rootCause instanceof ClientAbortException)) {
container.getLogger().error(sm.getString(
"standardWrapper.serviceExceptionRoot",
wrapper.getName(), context.getName(), e.getMessage()),
rootCause);
}
throwable = e;
exception(request, response, e);
} catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
container.getLogger().error(sm.getString(
"standardWrapper.serviceException", wrapper.getName(),
context.getName()), e);
throwable = e;
exception(request, response, e);
}
// Release the filter chain (if any) for this request
if (filterChain != null) {
if (request.isComet()) {
// If this is a Comet request, then the same chain will be used for the
// processing of all subsequent events.
filterChain.reuse();
} else {
filterChain.release();
}
}
// Deallocate the allocated servlet instance
try {
if (servlet != null) {
wrapper.deallocate(servlet);
}
} catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
container.getLogger().error(sm.getString("standardWrapper.deallocateException",
wrapper.getName()), e);
if (throwable == null) {
throwable = e;
exception(request, response, e);
}
}
// If this servlet has been marked permanently unavailable,
// unload it and release this instance
try {
if ((servlet != null) &&
(wrapper.getAvailable() == Long.MAX_VALUE)) {
wrapper.unload();
}
} catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
container.getLogger().error(sm.getString("standardWrapper.unloadException",
wrapper.getName()), e);
if (throwable == null) {
throwable = e;
exception(request, response, e);
}
}
long t2=System.currentTimeMillis();
long time=t2-t1;
processingTime += time;
if( time > maxTime) maxTime=time;
if( time < minTime) minTime=time;
}
- Marker 1: allocates the Servlet instance; the lazy instantiation uses Java's double-checked locking idiom (a standalone sketch of the idiom follows after the allocate() listing below). Note that before the Servlet 2.4 specification there was a SingleThreadModel, a mechanism similar to EJB stateless session beans, where each incoming thread took an instance from an instance pool to serve the response. Since Servlet 2.4, the single-thread model has been deprecated.
org.apache.catalina.core.StandardWrapper#allocate()
/**
* Allocate an initialized instance of this Servlet that is ready to have
* its <code>service()</code> method called. If the servlet class does
* not implement <code>SingleThreadModel</code>, the (only) initialized
* instance may be returned immediately. If the servlet class implements
* <code>SingleThreadModel</code>, the Wrapper implementation must ensure
* that this instance is not allocated again until it is deallocated by a
* call to <code>deallocate()</code>.
*
* @exception ServletException if the servlet init() method threw
* an exception
* @exception ServletException if a loading error occurs
*/
@Override
public Servlet allocate() throws ServletException {
// If we are currently unloading this servlet, throw an exception
if (unloading)
throw new ServletException
(sm.getString("standardWrapper.unloading", getName()));
boolean newInstance = false;
// If not SingleThreadedModel, return the same instance every time
if (!singleThreadModel) {
// Load and initialize our instance if necessary
if (instance == null) {
synchronized (this) {
if (instance == null) {
try {
if (log.isDebugEnabled())
log.debug("Allocating non-STM instance");
instance = loadServlet();
if (!singleThreadModel) {
// For non-STM, increment here to prevent a race
// condition with unload. Bug 43683, test case
// #3
newInstance = true;
countAllocated.incrementAndGet();
}
} catch (ServletException e) {
throw e;
} catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
throw new ServletException
(sm.getString("standardWrapper.allocate"), e);
}
}
}
}
if (!instanceInitialized) {
initServlet(instance);
}
if (singleThreadModel) {
if (newInstance) {
// Have to do this outside of the sync above to prevent a
// possible deadlock
synchronized (instancePool) {
instancePool.push(instance);
nInstances++;
}
}
} else {
if (log.isTraceEnabled())
log.trace(" Returning non-STM instance");
// For new instances, count will have been incremented at the
// time of creation
if (!newInstance) {
countAllocated.incrementAndGet();
}
return (instance);
}
}
synchronized (instancePool) {
while (countAllocated.get() >= nInstances) {
// Allocate a new instance if possible, or else wait
if (nInstances < maxInstances) {
try {
instancePool.push(loadServlet());
nInstances++;
} catch (ServletException e) {
throw e;
} catch (Throwable e) {
ExceptionUtils.handleThrowable(e);
throw new ServletException
(sm.getString("standardWrapper.allocate"), e);
}
} else {
try {
instancePool.wait();
} catch (InterruptedException e) {
// Ignore
}
}
}
if (log.isTraceEnabled())
log.trace(" Returning allocated STM instance");
countAllocated.incrementAndGet();
return instancePool.pop();
}
}
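The lazy creation of instance in allocate() above is the double-checked locking idiom mentioned earlier: check without the lock, lock, check again, then create. Here is a minimal generic sketch of the idiom; note that the sketch declares the field volatile, which the Java memory model requires for the idiom to be safe, and that it is a standalone illustration rather than StandardWrapper itself.

public class LazySingleHolder<T> {

    // volatile so that a fully constructed instance is visible to all threads
    private volatile T instance;
    private final java.util.function.Supplier<T> factory;

    public LazySingleHolder(java.util.function.Supplier<T> factory) {
        this.factory = factory;
    }

    public T get() {
        T result = instance;
        if (result == null) {                  // first check, no locking (fast path)
            synchronized (this) {
                result = instance;
                if (result == null) {          // second check, holding the lock
                    result = factory.get();    // analogous to loadServlet() above
                    instance = result;
                }
            }
        }
        return result;
    }
}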
- Marker 2: invokes the familiar Servlet filter chain, which in the end calls the Servlet itself.
Finally, consider how the filter chain is processed in org.apache.catalina.core.ApplicationFilterChain#doFilter. doFilter() takes the filters configured in web.xml (via their filter configs) and invokes them one by one; once every filter has run, it finally calls Servlet#service().
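The way such a chain works can be sketched in a few lines: the chain keeps a position, each call advances to the next filter, and once the filters are exhausted the servlet's service() method is invoked. MiniFilter, MiniServlet and MiniFilterChain below are hypothetical stand-ins, not the javax.servlet API or ApplicationFilterChain itself.

import java.util.Arrays;
import java.util.List;

public class MiniFilterChainDemo {

    // Hypothetical minimal stand-ins for Filter and Servlet
    interface MiniFilter { void doFilter(String request, MiniFilterChain chain); }
    interface MiniServlet { void service(String request); }

    static class MiniFilterChain {
        private final List<MiniFilter> filters;
        private final MiniServlet servlet;
        private int pos = 0;

        MiniFilterChain(List<MiniFilter> filters, MiniServlet servlet) {
            this.filters = filters;
            this.servlet = servlet;
        }

        void doFilter(String request) {
            if (pos < filters.size()) {
                // Invoke the next filter; it is expected to call chain.doFilter() again
                filters.get(pos++).doFilter(request, this);
            } else {
                // No filters left: finally call the servlet, as ApplicationFilterChain does
                servlet.service(request);
            }
        }
    }

    public static void main(String[] args) {
        MiniFilter logging = (req, chain) -> {
            System.out.println("filter saw: " + req);
            chain.doFilter(req);
        };
        MiniServlet servlet = req -> System.out.println("servlet handles: " + req);
        new MiniFilterChain(Arrays.asList(logging), servlet).doFilter("GET /contextpath/querystring");
    }
}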
The analysis above should make clear how Tomcat handles an incoming request step by step. To summarize:
- The user's browser sends a request, which arrives at the socket port that the corresponding Connector listens on.
- The Connector reads the data from the socket stream and, following the HTTP protocol, parses it into Request and Response objects.
- The Host, Context and Wrapper corresponding to the Request object are located.
- Finally, Servlet#service() is invoked to handle the request (a minimal example servlet is sketched below).
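For completeness, the whole chain ends in application code. A minimal servlet of the kind that would finally be reached through this path looks as follows; it uses the standard javax.servlet API, and its URL mapping is assumed to be declared in web.xml (or with @WebServlet on Servlet 3.0+).

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {

    // HttpServlet#service() dispatches GET requests to doGet()
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().write("hello from " + req.getRequestURI());
    }
}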