1. Overview
The following code is boilerplate for a Netty server; this post uses it as the entry point for the analysis.
package com.mzj.netty.mynettysrc;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;
public class NettyServer {
public void start(int port) throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new StringDecoder());
pipeline.addLast(new StringEncoder());
pipeline.addLast(new ChatServerHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
System.out.println("服务已启动,监听端口" + port + "");
// 绑定端口,开始接收连接请求
ChannelFuture f = b.bind(port).sync();
// 等待服务器 socket 关闭 。
f.channel().closeFuture().sync();
} finally {
workerGroup.shutdownGracefully();
bossGroup.shutdownGracefully();
System.out.println("服务已关闭");
}
}
// Placeholder business handler: a real implementation would typically extend
// SimpleChannelInboundHandler<String> and consume the decoded messages there.
private class ChatServerHandler implements ChannelHandler {
@Override
public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
}
@Override
public void handlerRemoved(ChannelHandlerContext ctx) throws Exception {
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
}
}
public static void main(String[] args) {
try {
new NettyServer().start(8080);
} catch (Exception e) {
e.printStackTrace();
}
}
}
The server-side code is essentially the same as the client side; the differences are:
- two event loop groups (EventLoopGroup) instead of one
- a different Channel type: the server uses NioServerSocketChannel
- the server configures its own business handlers
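For comparison, a minimal client-side bootstrap is sketched below (an illustration only, mirroring the handlers of the server example above; not taken from the client-side post):
// Sketch of the client-side counterpart, highlighting the three differences.
// Bootstrap is io.netty.bootstrap.Bootstrap; NioSocketChannel is io.netty.channel.socket.nio.NioSocketChannel.
EventLoopGroup group = new NioEventLoopGroup();         // one event loop group, not two
Bootstrap b = new Bootstrap();                          // Bootstrap, not ServerBootstrap
b.group(group)
 .channel(NioSocketChannel.class)                       // NioSocketChannel, not NioServerSocketChannel
 .handler(new ChannelInitializer<SocketChannel>() {     // handler(), not childHandler()
     @Override
     public void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new StringDecoder());
         ch.pipeline().addLast(new StringEncoder());
     }
 });
ChannelFuture f = b.connect("127.0.0.1", 8080).sync();  // connect() instead of bind()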
2. Analysis
2.1 Creating the event loop groups
1. Notes
The server needs two EventLoopGroups: a bossGroup and a workerGroup.
- bossGroup: accepts client connection requests, i.e. it handles only the server-side accept
- workerGroup: handles the I/O with each connected client
- First, the bossGroup continuously listens for client connections; when a new connection arrives, it initializes the resources for that connection;
- then one EventLoop is picked from the workerGroup and bound to that connection;
- from then on, all interaction between the server and that client happens on the assigned EventLoop.
2. User code
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
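A side note on sizing: in Netty 4.x the no-arg NioEventLoopGroup constructor defaults to 2 * the number of CPU cores threads (overridable via the io.netty.eventLoopThreads system property). Since the boss group here only accepts connections on a single port, one thread is enough, so a common refinement is:
EventLoopGroup bossGroup = new NioEventLoopGroup(1);  // a single accept thread suffices for one listening port
EventLoopGroup workerGroup = new NioEventLoopGroup(); // defaults to 2 * CPU cores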
3. Source analysis
See my other post: 《netty 之 源码分析 之 客户端启动过程分析》.
2.1.1 bossGroup and workerGroup
1. User code:
b.group(bossGroup, workerGroup)
2. Source analysis:
Stepping into the call above (key code only):
public class ServerBootstrap extends AbstractBootstrap<ServerBootstrap, ServerChannel> {
public ServerBootstrap group(EventLoopGroup parentGroup, EventLoopGroup childGroup) {
super.group(parentGroup);
this.childGroup = childGroup;
return this;
}
}
- bossGroup is passed to super.group(parentGroup), which stores it in the group field of the parent class AbstractBootstrap
- workerGroup is stored in the childGroup field
Later, when b.bind in the user code reaches AbstractBootstrap's initAndRegister method (non-essential code omitted):
public abstract class AbstractBootstrap<B extends AbstractBootstrap<B, C>, C extends Channel> implements Cloneable {
final ChannelFuture initAndRegister() {
Channel channel = channelFactory.newChannel();
init(channel);
ChannelFuture regFuture = config().group().register(channel);
return regFuture;
}
}
Here group() returns the bossGroup, and the channel passed to its register method is the NioServerSocketChannel instance;
so this is the point where the bossGroup is associated with the NioServerSocketChannel!
Now look at the init(channel) call above, in its ServerBootstrap implementation (key code only):
@Override
void init(Channel channel) throws Exception {
ChannelPipeline p = channel.pipeline();
final EventLoopGroup currentChildGroup = childGroup;
final ChannelHandler currentChildHandler = childHandler;
final Entry<ChannelOption<?>, Object>[] currentChildOptions;
final Entry<AttributeKey<?>, Object>[] currentChildAttrs;
p.addLast(new ChannelInitializer<Channel>() {
@Override
public void initChannel(final Channel ch) throws Exception {
final ChannelPipeline pipeline = ch.pipeline();
ChannelHandler handler = config.handler();
if (handler != null) {
pipeline.addLast(handler);
}
ch.eventLoop().execute(new Runnable() {
@Override
public void run() {
pipeline.addLast(new ServerBootstrapAcceptor(
ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
}
});
}
});
}
- As you can see, init adds a ChannelInitializer to the NioServerSocketChannel's ChannelPipeline, and that ChannelInitializer's initChannel implementation adds a very important handler to the pipeline: ServerBootstrapAcceptor. Note its five constructor arguments:
- ch: the NioServerSocketChannel
- currentChildGroup: the workerGroup
- currentChildHandler: the childHandler (to be added to a client Channel's pipeline when that client connects, so it can process inbound/outbound data)
- currentChildOptions: the childOptions (socket options to apply to a client Channel when it connects)
- currentChildAttrs: the childAttrs (user-defined attributes to apply to a client Channel when it connects)
- Applying the last three is exactly the work the bossGroup is responsible for when a connection comes in.
Next, look at the key method of ServerBootstrapAcceptor, channelRead:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
final Channel child = (Channel) msg;
child.pipeline().addLast(childHandler);
setChannelOptions(child, childOptions, logger);
for (Entry<AttributeKey<?>, Object> e: childAttrs) {
child.attr((AttributeKey<Object>) e.getKey()).set(e.getValue());
}
try {
childGroup.register(child).addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (!future.isSuccess()) {
forceClose(child, future.cause());
}
}
});
} catch (Throwable t) {
forceClose(child, t);
}
}
- childGroup here is currentChildGroup, i.e. the workerGroup
- child is the NioSocketChannel, i.e. the client connection
- The method adds the childHandler to the client NioSocketChannel's pipeline, applies the options and attrs to it, and then, in childGroup.register, associates one EventLoop of the workerGroup with the NioSocketChannel:
public abstract class MultithreadEventLoopGroup extends MultithreadEventExecutorGroup implements EventLoopGroup {
@Override
public ChannelFuture register(Channel channel) {
return next().register(channel);
}
}
- Digging deeper, it is the register method of AbstractChannel's AbstractUnsafe that assigns the EventLoop to AbstractChannel's eventLoop field and, crucially, hands the rest of the flow over to that EventLoop thread (execution moves from a bossGroup thread to a workerGroup thread) (key code only):
public abstract class AbstractChannel extends DefaultAttributeMap implements Channel {
protected abstract class AbstractUnsafe implements Unsafe {
@Override
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
// Assign the EventLoop to AbstractChannel's eventLoop field
AbstractChannel.this.eventLoop = eventLoop;
if (eventLoop.inEventLoop()) {
register0(promise);
} else {
try {
// If the current thread is not the EventLoop thread, hand the rest of the flow over to it
eventLoop.execute(new Runnable() {
@Override
public void run() {
register0(promise);
}
});
} catch (Throwable t) {
...
}
}
}
}
}
PS: once AbstractChannel$AbstractUnsafe.register has run, the Channel and the EventLoop are associated.
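Incidentally, the run-directly-or-submit pattern inside register is also the standard idiom for application code that must touch a channel from outside its EventLoop. A minimal sketch (the helper name is illustrative, not a Netty API):
// Mirrors AbstractUnsafe.register: run in place when already on the channel's
// EventLoop thread, otherwise hand the work over to that thread.
static void runOnEventLoop(Channel channel, Runnable work) {
    EventLoop loop = channel.eventLoop();
    if (loop.inEventLoop()) {
        work.run();             // already on the EventLoop thread
    } else {
        loop.execute(work);     // enqueue for the EventLoop thread
    }
}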
Next question: how does ServerBootstrapAcceptor's channelRead method get called?
Answer: at startup the server enters an infinite loop inside NioEventLoop (its run method), waiting for client connections; the loop does not simply block forever when no events arrive. In other words: server startup --> NioEventLoop's run method --> wait for client connections.
*** This is only a quick pass through the flow; the infinite loop itself is covered in section 2.3.4. ***
When a client connects, NioEventLoop's private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) method processes the received event and dispatches to a different branch depending on the event type (key code only):
public final class NioEventLoop extends SingleThreadEventLoop {
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
try {
int readyOps = k.readyOps();
if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
unsafe.finishConnect();
}
if ((readyOps & SelectionKey.OP_WRITE) != 0) {
ch.unsafe().forceFlush();
}
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
unsafe.read();
}
} catch (CancelledKeyException ignored) {
unsafe.close(unsafe.voidPromise());
}
}
}
- Since this is a client connection event, the third if branch is taken: OP_READ and OP_ACCEPT share one branch,
- which calls the read method of AbstractNioMessageChannel.NioMessageUnsafe (key code only):
private final class NioMessageUnsafe extends AbstractNioUnsafe {
private final List<Object> readBuf = new ArrayList<Object>();
@Override
public void read() {
int localRead = doReadMessages(readBuf);
int size = readBuf.size();
for (int i = 0; i < size; i ++) {
readPending = false;
pipeline.fireChannelRead(readBuf.get(i));
}
}
}
- read in turn calls NioServerSocketChannel's doReadMessages method (doReadMessages is an abstract method of AbstractNioMessageChannel, the enclosing class of NioMessageUnsafe; the server-side implementation lives in NioServerSocketChannel) (key code only):
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
SocketChannel ch = SocketUtils.accept(javaChannel());
buf.add(new NioSocketChannel(this, ch));
return 1;
}
- javaChannel() first obtains the JDK ServerSocketChannel
- SocketUtils.accept then accepts the connection and returns the client's JDK SocketChannel
- a Netty NioSocketChannel is then instantiated, with the NioServerSocketChannel (this) and the JDK client SocketChannel as arguments
- the NioSocketChannel's parent property (a field of its ancestor AbstractChannel) is thus the NioServerSocketChannel it was accepted on
- next, Netty's pipeline mechanism propagates the read event through the handlers (the pipeline.fireChannelRead(readBuf.get(i)) call in the for loop of the previous code block), where readBuf.get(i) is the client connection object, the NioSocketChannel
- DefaultChannelPipeline's fireChannelRead is then invoked, starting from head:
public class DefaultChannelPipeline implements ChannelPipeline {
@Override
public final ChannelPipeline fireChannelRead(Object msg) {
AbstractChannelHandlerContext.invokeChannelRead(head, msg);
return this;
}
}
- and this eventually triggers ServerBootstrapAcceptor's channelRead method.
2.2 Creating and configuring the ServerBootstrap
1. Notes
None yet.
2. User code
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new StringDecoder());
pipeline.addLast(new StringEncoder());
pipeline.addLast(new ChatServerHandler());
}
})
.option(ChannelOption.SO_BACKLOG, 128)
.childOption(ChannelOption.SO_KEEPALIVE, true);
3. Source analysis:
1) Creating the ServerBootstrap
To be filled in...
2) Setting the two groups
To be filled in...
3) Setting NioServerSocketChannel.class
To be filled in...
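Until this item is written up, the short version: channel(NioServerSocketChannel.class) simply records a reflective factory on the bootstrap, and initAndRegister later calls channelFactory.newChannel(), which invokes the no-arg constructor. Roughly (key lines of AbstractBootstrap in Netty 4.1, quoted from memory):
public B channel(Class<? extends C> channelClass) {
    // wraps the class in a factory whose newChannel() instantiates it reflectively
    return channelFactory(new ReflectiveChannelFactory<C>(channelClass));
}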
4) Setting the options
To be filled in...
5) Setting the handlers
To be filled in...
2.3 Port binding and client connections
1. Notes: none
2. User code:
ChannelFuture f = b.bind(port).sync();
3. Source analysis:
2.3.1 When the NioServerSocketChannel is created
1. Notes
As on the client side, namely in:
AbstractBootstrap->bind
AbstractBootstrap->doBind->initAndRegister
2.3.2 How the NioServerSocketChannel is initialized
1. Notes: none
2. User code: none
3. Source analysis:
From the client-side analysis we already know that the NioServerSocketChannel object is created reflectively inside initAndRegister, so let's look directly at its no-arg constructor:
public NioServerSocketChannel() {
this(newSocket(DEFAULT_SELECTOR_PROVIDER));
}
private static ServerSocketChannel newSocket(SelectorProvider provider) {
try {
return provider.openServerSocketChannel();
} catch (IOException e) {
throw new ChannelException(
"Failed to open a server socket.", e);
}
}
- As on the client side (NioSocketChannel), the constructor calls newSocket to open a Java NIO channel; the difference is that the client's newSocket calls SelectorProvider's openSocketChannel while the server's calls SelectorProvider's openServerSocketChannel: one yields a Java SocketChannel, the other a Java ServerSocketChannel
It then delegates to the overloaded constructor:
public NioServerSocketChannel(ServerSocketChannel channel) {
super(null, channel, SelectionKey.OP_ACCEPT);
config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
- It calls the superclass constructor with SelectionKey.OP_ACCEPT, meaning that once started the server listens for client connection requests; the client, by contrast, passes SelectionKey.OP_READ
- The config field is initialized with the underlying JDK ServerSocket, so configured options can be applied to it (in plain NIO programming, serverSocketChannel.socket() likewise returns the ServerSocket)
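As an aside on where such config values end up: the SO_BACKLOG option set in the user code is read back out of this config when the channel actually binds. Roughly (the key line of NioServerSocketChannel's doBind in Netty 4.1, quoted from memory; the real method also branches on the Java version):
@Override
protected void doBind(SocketAddress localAddress) throws Exception {
    // the backlog configured via ChannelOption.SO_BACKLOG is applied here
    javaChannel().bind(localAddress, config.getBacklog());
}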
Next comes the superclass constructor of AbstractNioMessageChannel, which does nothing special.
Then the constructor of AbstractNioChannel:
protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
super(parent);
this.ch = ch;
this.readInterestOp = readInterestOp;
try {
ch.configureBlocking(false);
} catch (IOException e) {
try {
ch.close();
} catch (IOException e2) {
if (logger.isWarnEnabled()) {
logger.warn(
"Failed to close a partially initialized socket.", e2);
}
}
throw new ChannelException("Failed to enter non-blocking mode.", e);
}
}
- the JDK ServerSocketChannel is switched to non-blocking mode
- ch is assigned the raw JDK ServerSocketChannel (the field is of type SelectableChannel)
- readInterestOp is assigned SelectionKey.OP_ACCEPT
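In plain JDK NIO terms, the state these constructors prepare is equivalent to the hand-written setup below (a sketch of the equivalent, not Netty code; in Netty the actual Selector registration happens later, during register):
// Hand-written JDK NIO equivalent of the state the constructors set up.
// Uses java.nio.channels.* and java.nio.channels.spi.SelectorProvider.
static ServerSocketChannel openNonBlockingServerChannel(Selector selector) throws IOException {
    ServerSocketChannel serverChannel = SelectorProvider.provider().openServerSocketChannel();
    serverChannel.configureBlocking(false);                    // as in AbstractNioChannel's constructor
    serverChannel.register(selector, SelectionKey.OP_ACCEPT);  // readInterestOp = OP_ACCEPT
    return serverChannel;
}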
Finally, the constructor of AbstractChannel:
protected AbstractChannel(Channel parent) {
this.parent = parent;
id = newId();
unsafe = newUnsafe();
pipeline = newChannelPipeline();
}
- An Unsafe object is instantiated:
- on the client side, NioSocketChannel's newUnsafe creates an AbstractNioUnsafe subclass (NioSocketChannelUnsafe)
- on the server side, AbstractNioMessageChannel's newUnsafe creates an AbstractNioUnsafe subclass (NioMessageUnsafe)
- A pipeline object is instantiated: the ChannelPipeline implementation is the same as on the client side, namely the DefaultChannelPipeline created by AbstractChannel's newChannelPipeline
- parent is set to its default value, null
2.3.2-1 How the ChannelPipeline is initialized
Same as on the client side; see section 2.3.2-2 of the client-side post.
2.3.3 Registering the server Channel with the Selector
Same as on the client side; see section 2.3.3 of the client-side post.
2.3.4 Server-side Selector event polling
1. Notes: none
2. User code:
ChannelFuture f = b.bind(port).sync();
3. Source analysis:
AbstractBootstrap->bind
AbstractBootstrap->doBind
AbstractBootstrap->initAndRegister
MultithreadEventLoopGroup->register(Channel channel), where the Channel is the reflectively created NioServerSocketChannel
public abstract class MultithreadEventLoopGroup extends MultithreadEventExecutorGroup implements EventLoopGroup {
@Override
public ChannelFuture register(Channel channel) {
// Pick a NioEventLoop from the bossGroup and associate it with the NioServerSocketChannel
return next().register(channel);
}
}
- The next() method was covered in 《netty 之 源码分析 之 客户端启动过程分析》: depending on whether the group's thread count is a power of two (the check made by DefaultEventExecutorChooserFactory, not odd vs. even), a different EventExecutorChooser implementation is created, and it picks the next NioEventLoop from the group (here the bossGroup) in round-robin fashion
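For reference, the gist of the two chooser implementations (simplified from Netty 4.1's DefaultEventExecutorChooserFactory; the factory method and some details are trimmed):
// Power-of-two group sizes allow a cheap bitmask instead of a modulo:
private static final class PowerOfTwoEventExecutorChooser implements EventExecutorChooser {
    private final AtomicInteger idx = new AtomicInteger();
    private final EventExecutor[] executors;
    PowerOfTwoEventExecutorChooser(EventExecutor[] executors) { this.executors = executors; }
    @Override
    public EventExecutor next() {
        return executors[idx.getAndIncrement() & executors.length - 1];
    }
}
// The generic fallback uses modulo arithmetic:
private static final class GenericEventExecutorChooser implements EventExecutorChooser {
    private final AtomicInteger idx = new AtomicInteger();
    private final EventExecutor[] executors;
    GenericEventExecutorChooser(EventExecutor[] executors) { this.executors = executors; }
    @Override
    public EventExecutor next() {
        return executors[Math.abs(idx.getAndIncrement() % executors.length)];
    }
}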
SingleThreadEventLoop->register(ChannelPromise promise)
public abstract class SingleThreadEventLoop extends SingleThreadEventExecutor implements EventLoop {
@Override
public ChannelFuture register(Channel channel) {
return register(new DefaultChannelPromise(channel, this)); // wrap the channel in a DefaultChannelPromise
}
@Override
public ChannelFuture register(final ChannelPromise promise) {
ObjectUtil.checkNotNull(promise, "promise");
promise.channel().unsafe().register(this, promise);
return promise;
}
}
AbstractChannel.AbstractUnsafe->register(EventLoop eventLoop, final ChannelPromise promise), shown below:
@Override
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
// checks omitted
AbstractChannel.this.eventLoop = eventLoop;
if (eventLoop.inEventLoop()) {
register0(promise);
} else {
try {
eventLoop.execute(new Runnable() {
@Override
public void run() {
register0(promise);
}
});
} catch (Throwable t) {
closeForcibly();
closeFuture.setClosed();
safeSetFailure(promise, t);
}
}
}
- The eventLoop is bound to AbstractChannel's eventLoop field
- eventLoop.inEventLoop() checks whether the currently executing thread is the NioEventLoop's own thread; the check is implemented in SingleThreadEventExecutor. Here it returns false: the calling thread is main, while this.thread is the assigned NioEventLoop thread (the thread field of SingleThreadEventExecutor):
@Override
public boolean inEventLoop(Thread thread) { // the caller passes in Thread.currentThread()
return thread == this.thread;
}
Since the check returns false, as analyzed before, the rest of the flow (the register0(promise) call) has to be handed over to the NioEventLoop thread (from the main thread to a bossGroup thread) via eventLoop.execute(new Runnable() {...}). The execute method is declared in the JDK Executor interface and implemented in SingleThreadEventExecutor:
public abstract class SingleThreadEventExecutor extends AbstractScheduledEventExecutor implements OrderedEventExecutor {
@Override
public void execute(Runnable task) {
// null check omitted
boolean inEventLoop = inEventLoop();
if (inEventLoop) {
addTask(task);
} else {
startThread();
addTask(task);
// task-removal logic omitted
}
// further checks omitted
}
}
- It first checks whether the calling thread is the NioEventLoop thread; again the answer is no (the caller is main, this.thread is the assigned NioEventLoop thread), so the else branch runs. The interesting call is startThread (non-essential code omitted):
public abstract class SingleThreadEventExecutor extends AbstractScheduledEventExecutor implements OrderedEventExecutor {
private void startThread() {
if (state == ST_NOT_STARTED) {
if (STATE_UPDATER.compareAndSet(this, ST_NOT_STARTED, ST_STARTED)) {
doStartThread();
}
}
}
private void doStartThread() {
executor.execute(new Runnable() {
@Override
public void run() {
SingleThreadEventExecutor.this.run();
}
});
}
}
- executor is an io.netty.util.concurrent.ThreadPerTaskExecutor, created in the MultithreadEventExecutorGroup constructor as executor = new ThreadPerTaskExecutor(newDefaultThreadFactory()), i.e. with a default thread factory
- Its execute method:
public final class ThreadPerTaskExecutor implements Executor {
@Override
public void execute(Runnable command) {
threadFactory.newThread(command).start();
}
}
- newThread on the thread factory creates a thread (this is exactly the thread the NioEventLoop wraps), hands it the Runnable, and starts it
- The started thread runs the Runnable, i.e. SingleThreadEventExecutor.this.run(). This call is essentially what starts the infinite loop of the server's bossGroup thread; the implementation is in NioEventLoop (so the run method below executes on the NioEventLoop thread):
public final class NioEventLoop extends SingleThreadEventLoop {
@Override
protected void run() {
for (;;) {
try {
switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
case SelectStrategy.CONTINUE:
continue;
case SelectStrategy.SELECT:
select(wakenUp.getAndSet(false));
// select wake-up logic omitted
default:
}
cancelledKeys = 0;
needsToSelectAgain = false;
final int ioRatio = this.ioRatio;
if (ioRatio == 100) {
// exception handling omitted
processSelectedKeys();
} else {
final long ioStartTime = System.nanoTime();
// exception handling omitted
processSelectedKeys();
}
} catch (Throwable t) {
handleLoopException(t);
}
}
}
}
At last we have reached NioEventLoop's run method, closing the loop with the forward reference to section 2.3.4 earlier in this post. In short:
- the code runs an infinite loop that keeps polling for SelectionKeys; the select method also contains the workaround for the JDK empty-poll bug (covered in the next subsection)
- processSelectedKeys() handles each kind of polled event
- when a new client connection arrives, the flow ends up in AbstractNioMessageChannel's doReadMessages() as shown earlier (data reads on an established client channel go through AbstractNioByteChannel instead)
4. Recap: the Selector polling flow
- Selector event polling starts from SingleThreadEventExecutor's execute method
- During server startup, that method switches threads (from main to a bossGroup thread): doStartThread's executor.execute creates and starts a new JDK Thread, which then enters the infinite loop
- After the switch, AbstractChannel's register0 runs on the bossGroup thread
- Once the server is up, whenever a task must run on a NioEventLoop, it is submitted through the Channel's EventLoop via the execute method (SingleThreadEventExecutor.execute), which first appends the task to the lock-free serial task queue before it gets executed
- each task in that queue is actually executed inside NioEventLoop's run method
- run calls processSelectedKeys() to handle the polled events
- In one sentence: the first call to an EventLoop's execute method triggers startThread, which starts the Java thread backing that EventLoop
2.3.5 How Netty works around the JDK empty-poll bug
1. Notes:
1) The bug: on some Linux 2.6 kernels, the poll and epoll system calls can mark a suddenly interrupted socket connection with POLLHUP or POLLERR in the returned event set; the changed event set wakes the Selector even though there is nothing to process, producing empty polls that eventually drive CPU usage to 100%. Officially JDK 1.6 update 18 fixed the problem, but it still occurs on 1.7, just with lower probability.
2) Why it hurts: when the Selector's poll result is empty there was no wakeup and no new message to handle, yet select keeps returning immediately, so the loop spins and CPU usage reaches 100%.
3) Netty's workaround: when an empty-poll spin is detected (the criterion is in the source analysis below), create a new Selector and re-register all valid events onto it, which breaks the spin.
2. User code: none
3. Source analysis:
First, recall the key event-polling code:
public final class NioEventLoop extends SingleThreadEventLoop {
@Override
protected void run() {
for (;;) {
try {
switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
case SelectStrategy.CONTINUE:
continue;
case SelectStrategy.SELECT:
select(wakenUp.getAndSet(false));
// select wake-up logic omitted
default:
}
// event-processing logic omitted
} catch (Throwable t) {
handleLoopException(t);
}
}
}
}
public final class NioEventLoop extends SingleThreadEventLoop {
private static final int SELECTOR_AUTO_REBUILD_THRESHOLD;
static {
    int selectorAutoRebuildThreshold = SystemPropertyUtil.getInt("io.netty.selectorAutoRebuildThreshold", 512);
    SELECTOR_AUTO_REBUILD_THRESHOLD = selectorAutoRebuildThreshold;
}
private void select(boolean oldWakenUp) throws IOException {
Selector selector = this.selector;
long currentTimeNanos = System.nanoTime();
for (;;) {
// non-essential code omitted
long timeoutMillis = (selectDeadLineNanos - currentTimeNanos + 500000L) / 1000000L;
int selectedKeys = selector.select(timeoutMillis);
selectCnt ++;
long time = System.nanoTime();
if (time - TimeUnit.MILLISECONDS.toNanos(timeoutMillis) >= currentTimeNanos) {
// timeoutMillis elapsed without anything selected.
selectCnt = 1;
} else if (SELECTOR_AUTO_REBUILD_THRESHOLD > 0 &&
selectCnt >= SELECTOR_AUTO_REBUILD_THRESHOLD) {
// logging omitted
rebuildSelector();
selector = this.selector;
// Select again to populate selectedKeys.
selector.selectNow();
selectCnt = 1;
break;
}
currentTimeNanos = time;
}
}
}
- selectCnt is incremented on every poll
- before the select call, the current nanosecond timestamp is stored in currentTimeNanos
- after the select returns, the timestamp is taken again into time
- the difference between the two is how long that poll actually took
- so, per the logic above: if a select keeps returning prematurely (taking essentially no time, with neither the timeout elapsed nor anything selected) and this repeats at least 512 times (SELECTOR_AUTO_REBUILD_THRESHOLD), rebuildSelector is called to rebuild the Selector:
public final class NioEventLoop extends SingleThreadEventLoop {
public void rebuildSelector() {
// checks omitted
rebuildSelector0();
}
private void rebuildSelector0() {
final Selector oldSelector = selector;
final SelectorTuple newSelectorTuple;
// non-essential code omitted
newSelectorTuple = openSelector();
// Register all channels to the new Selector.
int nChannels = 0;
for (SelectionKey key: oldSelector.keys()) {
    Object a = key.attachment();
    int interestOps = key.interestOps(); // must be read before cancel()
    key.cancel();
    SelectionKey newKey = key.channel().register(newSelectorTuple.unwrappedSelector, interestOps, a);
// non-essential code omitted
}
// non-essential code omitted
selector = newSelectorTuple.selector;
oldSelector.close();
// non-essential code omitted
}
}
- rebuildSelector0 does three main things:
- create a new Selector
- cancel all the keys registered with the old Selector
- re-register the still-valid events with the new Selector; with that, Netty sidesteps the JDK empty-poll bug
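In isolation, the heart of the rebuild is just moving every key onto a fresh Selector. The standalone sketch below illustrates the idea in plain JDK NIO (simplified; the real rebuildSelector0 also fixes up Netty-side channel state such as the cached selection key):
// Simplified illustration of the rebuild idea: re-register all keys on a new Selector.
static Selector rebuild(Selector oldSelector) throws IOException {
    Selector newSelector = Selector.open();
    for (SelectionKey key : oldSelector.keys()) {
        if (!key.isValid()) {
            continue;                        // skip connections that are already gone
        }
        int interestOps = key.interestOps(); // read before cancel(), or it would throw
        Object attachment = key.attachment();
        key.cancel();                        // deregister from the spinning Selector
        key.channel().register(newSelector, interestOps, attachment);
    }
    oldSelector.close();
    return newSelector;
}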
2.3.6 How the handlers are added
1. Notes: the server-side handler setup differs slightly from the client side, because the server has two kinds of handlers:
- the handler set via ServerBootstrap's handler method, which is involved in accepting client connection requests
- the childHandler set via ServerBootstrap's childHandler method, which handles the I/O with each connected client
2. User code: none
3. Source analysis:
1) handler
Look again at ServerBootstrap's init method. During initialization, right after the server-side pipeline is created, its doubly linked list contains:
【Head】---【the ChannelInitializer created in init】---【Tail】
Later, when that ChannelInitializer's initChannel runs (once the Channel is bound to an EventLoop, here the NioServerSocketChannel to the bossGroup, a fireChannelRegistered event is triggered on the pipeline), the user-defined handler and the ServerBootstrapAcceptor are added to the server's pipeline, leaving it as:
【Head】---【user-defined handler】---【ServerBootstrapAcceptor】---【Tail】
2) childHandler
As analyzed earlier, ServerBootstrapAcceptor's channelRead sets up the handlers for each newly created client Channel and registers it with an EventLoop.
- The childHandler passed to addLast(childHandler) there is exactly the handler set via ServerBootstrap.childHandler() in the server startup code (the new ChannelInitializer in the user code below):
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new StringDecoder());
pipeline.addLast(new StringEncoder());
pipeline.addLast(new ChatServerHandler());
}
})
- Subsequently, on the server side, once a connected client's Channel is registered, that ChannelInitializer's initChannel method is triggered.
4. Summary
The relationship between the server-side handler and childHandler:
- The server-side NioServerSocketChannel's pipeline holds the user-defined handler (set via .handler()) and the ServerBootstrapAcceptor
- When a new client connection request arrives, ServerBootstrapAcceptor's channelRead creates the corresponding client NioSocketChannel, adds the childHandler (set via .childHandler()) to that channel's pipeline, and registers the NioSocketChannel with one NioEventLoop of the workerGroup
- The handler only plays a role during the accept phase; it deals with the client's incoming connection requests
- The childHandler takes over once the connection is established; it handles the I/O between client and server
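To see this division of labor in user code, a LoggingHandler (io.netty.handler.logging.LoggingHandler) is often attached on the parent side while business handlers go on the child side; a sketch:
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 // handler(): lives in the NioServerSocketChannel's pipeline, sees accept-side events
 .handler(new LoggingHandler(LogLevel.INFO))
 // childHandler(): copied into each accepted NioSocketChannel's pipeline, sees per-connection I/O
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     public void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new StringDecoder());
         ch.pipeline().addLast(new StringEncoder());
     }
 });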