You cannot see Netty's true face until you have read this true scripture
2021-03-02 08:27
Netty official site: https://netty.io/ — "Netty is an asynchronous event-driven network application framework for rapid development of maintainable high performance protocol servers & clients."
Anyone on the Java technology stack has probably heard that Netty is a wrapper over Java's nio (Non Blocking IO).
(During the source tracing I used Alibaba's Yuque mind-map product to record the main method calls; the image above is a partial screenshot. The complete original: https://www.yuque.com/docs/share/02fa3e3d-d485-48e1-9cfe-6722a3ad8915 )
Before a first dive into Netty's source, you should at least understand the Reactor Pattern, basic java.nio usage, and basic Netty usage; only then can you read Netty's source side by side with java.nio.
You cannot see Netty's true face until you have read this true scripture.
http://gee.cs.oswego.edu/dl/cpjslides/nio.pdf
In the traditional model, the server opens a dedicated thread for every client connection request — so-called BIO (Blocking IO).
In the event-driven model, per Doug Lea's slides: Reactor responds to IO events by dispatching the appropriate handler (similar to AWT thread); handlers perform non-blocking actions (similar to AWT ActionListeners); manage by binding handlers to events (similar to AWT addActionListener).
The Reactor pattern comes in (1) a single-threaded version, (2) a multi-threaded version, and (3) a multi-Reactor version (one main with several subs, or several mains with several subs).
Note: the demos below focus on the main logic only; they neither handle exceptions nor close resources.
For more official examples, see https://github.com/netty/netty/tree/4.1/example/
I suggest following my source-walk diagram while reading the content below, ideally with the debugger open: https://www.yuque.com/docs/share/02fa3e3d-d485-48e1-9cfe-6722a3ad8915
Note: the tracing targets the then-latest release of the source.
This article is from https://www.cnblogs.com/itwild/
Below, first a broad look at a few key classes, to make the code easier to read. Netty programs against abstractions; with no feel at all for the inheritance hierarchy of the common classes, reading the source will drive you mad. You know it!!!
Class definition: io.netty.channel.nio.NioEventLoopGroup Class diagram:
Class definition: io.netty.channel.nio.NioEventLoop Class diagram:
Class definition: io.netty.channel.socket.nio.NioServerSocketChannel Class diagram:
Class definition: io.netty.channel.socket.nio.NioSocketChannel Class diagram:
Class definition: io.netty.channel.ChannelInitializer Class diagram:
Class definition: io.netty.channel.ChannelInboundHandlerAdapter
* This implementation just forward the operation to the next {@link ChannelHandler} in the
* {@link ChannelPipeline}. Sub-classes may override a method implementation to change this.
*
* Be aware that messages are not released after the {@link #channelRead(ChannelHandlerContext, Object)}
* method returns automatically. If you are looking for a {@link ChannelInboundHandler} implementation that
* releases the received messages automatically, please see {@link SimpleChannelInboundHandler}.
Class diagram:
Class definition: io.netty.bootstrap.ServerBootstrap
Class diagram:
Class definition: io.netty.bootstrap.Bootstrap
The {@link #bind()} methods are useful in combination with connectionless transports such as datagram (UDP).
* For regular TCP connections, please use the provided {@link #connect()} methods.
Class diagram:
Now the source tracing begins in earnest, starting from io.netty.channel.nio.NioEventLoopGroup, where we meet the familiar SelectorProvider.provider(). Chasing a few layers in leads to io.netty.util.concurrent.MultithreadEventExecutorGroup; while the group is created, each io.netty.channel.nio.NioEventLoop is created, and there the Selector is opened. Constructing a NioEventLoop also runs through io.netty.channel.SingleThreadEventLoop and, further down, io.netty.util.concurrent.SingleThreadEventExecutor, where the task Queue gets assigned.
Next, io.netty.bootstrap.AbstractBootstrap: server startup boils down to three commented steps in initAndRegister(). In order: first, creating the ServerSocketChannel via io.netty.channel.ReflectiveChannelFactory (recall the earlier bootstrap.channel(...) call), then down through io.netty.channel.socket.nio.NioServerSocketChannel, io.netty.channel.nio.AbstractNioChannel and io.netty.channel.AbstractChannel, where a io.netty.channel.DefaultChannelPipeline is created and attached to the channel. Second, init(channel) in io.netty.bootstrap.ServerBootstrap adds a ChannelInitializer to the pipeline; once the channel is registered, its initChannel fires. Third, registration: tracing the class hierarchy (or debugging) leads through io.netty.channel.SingleThreadEventLoop to io.netty.channel.AbstractChannel's register, which hands a task to io.netty.util.concurrent.SingleThreadEventExecutor; addTask puts the Runnable into the taskQueue, and startThread, via io.netty.util.concurrent.ThreadPerTaskExecutor, starts the single event-loop thread whose run() lives in io.netty.channel.nio.NioEventLoop.
That queued Runnable then executes register0 in io.netty.channel.AbstractChannel, whose three commented sub-steps are: (1) io.netty.channel.nio.AbstractNioChannel's doRegister(), which plainly registers the channel with the Selector; (2) triggering handlerAdded() through io.netty.channel.DefaultChannelPipeline — the comments there say the channel is now registered and it is time to call back the handlers; understanding this requires the PendingHandlerCallback machinery in DefaultChannelPipeline and io.netty.channel.AbstractChannelHandlerContext, which ultimately triggers io.netty.channel.ChannelInitializer's initChannel (note that the ServerBootstrapAcceptor is not in the pipeline yet at this moment); and (3) triggering channelRegistered(). Once the register0 task finishes, the event loop polls the task it queued along the way (see io.netty.util.concurrent.SingleThreadEventExecutor), and with that the server is up.
On the client side, creating the Selector is exactly the same as on the server. The entry point is also the same; the difference is the channel class the client passes in. The client's init is much simpler; see io.netty.bootstrap.Bootstrap. The early steps match the server's, and after initAndRegister() completes — which, as analyzed, triggers the register flow — the callback that connects to the server runs upon successful registration.
For the connect itself, look at io.netty.bootstrap.Bootstrap; what is ultimately called is io.netty.channel.socket.nio.NioSocketChannel#doConnect(), and inside it io.netty.util.internal.SocketUtils, where we meet the familiar socketChannel.connect(...). The above covered the server startup and the client startup, with the client having issued its connection request; now turn back to the server. The server senses the IO event in io.netty.channel.nio.NioEventLoop's run() method and handles each event via processSelectedKey() in io.netty.channel.nio.NioEventLoop.
(Non Blocking IO) — a wrapper that lets us rapidly build network applications with higher performance and better scalability. So what exactly does Netty's wrapping of nio look like? This article lifts that veil from the source-code angle.
Prerequisites
Reactor Pattern
Doug Lea (author of the java.util.concurrent package) analyzes step by step, in "Scalable IO in Java", how to build scalable high-performance IO services and how the service model evolved. The Reactor Pattern described there has been borrowed by Netty and most other high-performance IO frameworks, so reading "Scalable IO in Java" carefully helps a lot in understanding Netty's architecture and design. For details, see the PDF linked above.
The traditional service model
i.e. blocking use of the APIs under java.net.ServerSocket.
The event-driven model
The Reactor pattern
Netty borrows precisely this multi-Reactor design.
Quick start: java.nio
Server side
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.channels.spi.SelectorProvider;
import java.util.Iterator;
public class NIOServer {
private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();
public static void main(String[] args) throws IOException {
// ServerSocketChannel.open()
ServerSocketChannel serverSocketChannel = DEFAULT_SELECTOR_PROVIDER.openServerSocketChannel();
serverSocketChannel.configureBlocking(false);
serverSocketChannel.socket().bind(new InetSocketAddress(8080));
// Selector.open()
Selector selector = DEFAULT_SELECTOR_PROVIDER.openSelector();
// register this serverSocketChannel with the selector
serverSocketChannel.register(selector, SelectionKey.OP_ACCEPT);
// selector.select()
while (!Thread.interrupted()) {
selector.select();
Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                // reconstructed dispatch loop (the original demo was truncated here)
                if (key.isAcceptable()) {
                    SocketChannel sc = ((ServerSocketChannel) key.channel()).accept();
                    sc.configureBlocking(false);
                    sc.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel sc = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    int len = sc.read(buffer);
                    if (len > 0) {
                        System.out.println(new String(buffer.array(), 0, len));
                    }
                }
            }
        }
    }
}
Client side
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.channels.spi.SelectorProvider;
import java.util.Iterator;
public class NIOClient {
private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();
public static void main(String[] args) throws IOException {
// SocketChannel.open()
SocketChannel socketChannel = DEFAULT_SELECTOR_PROVIDER.openSocketChannel();
socketChannel.configureBlocking(false);
socketChannel.connect(new InetSocketAddress("127.0.0.1", 8080));
// Selector.open()
Selector selector = DEFAULT_SELECTOR_PROVIDER.openSelector();
// register this socketChannel with the selector
socketChannel.register(selector, SelectionKey.OP_CONNECT);
// selector.select()
while (!Thread.interrupted()) {
selector.select();
Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                // reconstructed dispatch loop (the original demo was truncated here)
                if (key.isConnectable()) {
                    SocketChannel sc = (SocketChannel) key.channel();
                    if (sc.finishConnect()) {
                        sc.register(selector, SelectionKey.OP_READ);
                        sc.write(ByteBuffer.wrap("hello server".getBytes()));
                    }
                } else if (key.isReadable()) {
                    SocketChannel sc = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    int len = sc.read(buffer);
                    if (len > 0) {
                        System.out.println(new String(buffer.array(), 0, len));
                    }
                }
            }
        }
    }
}
Quick start: Netty
examples, see the GitHub link above.
Server side
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.util.CharsetUtil;
public class NettyServer {
public static void main(String[] args) throws Exception {
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap bootstrap = new ServerBootstrap()
.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_BACKLOG, 1024)
.childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // reconstructed: the original demo's custom handler was truncated here
                        ch.pipeline().addLast(new ChannelInboundHandlerAdapter());
                    }
                });
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
Client side
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.CharsetUtil;
public class NettyClient {
public static void main(String[] args) throws Exception {
EventLoopGroup group = new NioEventLoopGroup(1);
try {
Bootstrap bootstrap = new Bootstrap()
.group(group)
.channel(NioSocketChannel.class)
.handler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        // reconstructed: adds the user-written handler referenced later in the text
                        ch.pipeline().addLast(new NettyClientHandler());
                    }
                });
            bootstrap.connect("127.0.0.1", 8080).sync().channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
Source tracing
Follow along in debug mode, re-debugging anything unclear. The tracing is against the source of the 4.1.58.Final release, the latest at the time of writing. (This article is from the 行无际 blog.)
Key classes
NioEventLoopGroup
/**
* {@link MultithreadEventLoopGroup} implementations which is used for NIO {@link Selector} based {@link Channel}s.
*/
public class NioEventLoopGroup extends MultithreadEventLoopGroup
NioEventLoop
/**
* {@link SingleThreadEventLoop} implementation which register the {@link Channel}'s to a
* {@link Selector} and so does the multi-plexing of these in the event loop.
*
*/
public final class NioEventLoop extends SingleThreadEventLoop
NioServerSocketChannel
/**
* A {@link io.netty.channel.socket.ServerSocketChannel} implementation which uses
* NIO selector based implementation to accept new connections.
*/
public class NioServerSocketChannel extends AbstractNioMessageChannel
implements io.netty.channel.socket.ServerSocketChannel
NioSocketChannel
/**
* {@link io.netty.channel.socket.SocketChannel} which uses NIO selector based implementation.
*/
public class NioSocketChannel extends AbstractNioByteChannel
implements io.netty.channel.socket.SocketChannel
ChannelInitializer
/**
* A special {@link ChannelInboundHandler} which offers an easy way to initialize a {@link Channel} once it was
* registered to its {@link EventLoop}.
*
* Implementations are most often used in the context of {@link Bootstrap#handler(ChannelHandler)} ,
* {@link ServerBootstrap#handler(ChannelHandler)} and {@link ServerBootstrap#childHandler(ChannelHandler)} to
* setup the {@link ChannelPipeline} of a {@link Channel}.
*
*
*
* public class MyChannelInitializer extends {@link ChannelInitializer} {
* public void initChannel({@link Channel} channel) {
* channel.pipeline().addLast("myHandler", new MyHandler());
* }
* }
*
* {@link ServerBootstrap} bootstrap = ...;
* ...
* bootstrap.childHandler(new MyChannelInitializer());
* ...
*
* Be aware that this class is marked as {@link Sharable} and so the implementation must be safe to be re-used.
*
* @param <C> A sub-type of {@link Channel}
*/
ChannelInboundHandlerAdapter
/**
* Abstract base class for {@link ChannelInboundHandler} implementations which provide
* implementations of all of their methods.
*
*
ServerBootstrap
/**
* {@link Bootstrap} sub-class which allows easy bootstrap of {@link ServerChannel}
*
*/
public class ServerBootstrap extends AbstractBootstrap<ServerBootstrap, ServerChannel>
Bootstrap
/**
* A {@link Bootstrap} that makes it easy to bootstrap a {@link Channel} to use
* for clients.
*
*
Server-side startup
Creating the Selector
The Selector's creation starts from the line EventLoopGroup bossGroup = new NioEventLoopGroup(1):
/**
* Create a new instance using the specified number of threads, {@link ThreadFactory} and the
* {@link SelectorProvider} which is returned by {@link SelectorProvider#provider()}.
*/
public NioEventLoopGroup(int nThreads) {
this(nThreads, (Executor) null);
}
public NioEventLoopGroup(int nThreads, Executor executor) {
this(nThreads, executor, SelectorProvider.provider());
}
Here we see the familiar SelectorProvider.provider(); if it looks unfamiliar, revisit the java.nio quick start above. Chasing a few layers in, we reach NioEventLoopGroup's parent class MultithreadEventExecutorGroup:
protected MultithreadEventExecutorGroup(int nThreads, Executor executor,
EventExecutorChooserFactory chooserFactory, Object... args) {
if (executor == null) {
executor = new ThreadPerTaskExecutor(newDefaultThreadFactory());
}
children = new EventExecutor[nThreads];
for (int i = 0; i < nThreads; i++) {
        children[i] = newChild(executor, args); // (simplified; the real loop wraps this in try/finally)
    }
}
Note: the nThreads you pass to NioEventLoopGroup(int nThreads) ends up here in children = new EventExecutor[nThreads]. Now see what newChild(executor, args) does.
@Override
protected EventLoop newChild(Executor executor, Object... args) throws Exception {
EventLoopTaskQueueFactory queueFactory = args.length == 4 ? (EventLoopTaskQueueFactory) args[3] : null;
return new NioEventLoop(this, executor, (SelectorProvider) args[0],
((SelectStrategyFactory) args[1]).newSelectStrategy(), (RejectedExecutionHandler) args[2], queueFactory);
}
NioEventLoop(NioEventLoopGroup parent, Executor executor, SelectorProvider selectorProvider,
SelectStrategy strategy, RejectedExecutionHandler rejectedExecutionHandler,
EventLoopTaskQueueFactory queueFactory) {
super(parent, executor, false, newTaskQueue(queueFactory), newTaskQueue(queueFactory),
rejectedExecutionHandler);
this.provider = ObjectUtil.checkNotNull(selectorProvider, "selectorProvider");
this.selectStrategy = ObjectUtil.checkNotNull(strategy, "selectStrategy");
final SelectorTuple selectorTuple = openSelector();
this.selector = selectorTuple.selector;
this.unwrappedSelector = selectorTuple.unwrappedSelector;
}
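Stripped of Netty specifics, the structure built here — a group holding nThreads single-threaded children and handing them out in rotation — can be sketched in plain Java (class and method names are mine, not Netty's):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical miniature of MultithreadEventExecutorGroup's child-array wiring.
class MiniEventLoopGroup {
    private final String[] children;          // stand-ins for EventExecutor[nThreads]
    private final AtomicInteger idx = new AtomicInteger();

    MiniEventLoopGroup(int nThreads) {
        children = new String[nThreads];
        for (int i = 0; i < nThreads; i++) {
            children[i] = "eventLoop-" + i;   // where Netty calls newChild(executor, args)
        }
    }

    // next() picks children round-robin, like Netty's EventExecutorChooser
    String next() {
        return children[Math.abs(idx.getAndIncrement() % children.length)];
    }
}
```

Netty's real chooser adds a power-of-two fast path, but the observable behavior is the same round-robin rotation over the children created in the constructor loop.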
The Selector's creation happens right at this line: final SelectorTuple selectorTuple = openSelector(); — step inside.
private SelectorTuple openSelector() {
final Selector unwrappedSelector;
try {
unwrappedSelector = provider.openSelector();
} catch (IOException e) {
throw new ChannelException("failed to open a new selector", e);
}
if (DISABLE_KEY_SET_OPTIMIZATION) {
return new SelectorTuple(unwrappedSelector);
}
// 省略其他代码...
return new SelectorTuple(unwrappedSelector,
new SelectedSelectionKeySetSelector(unwrappedSelector, selectedKeySet));
}
Here we see the familiar provider.openSelector(); at this point the newly created Selector is tied to the EventLoop. Also, while creating the NioEventLoop, see what super(parent, executor, false, newTaskQueue(queueFactory), ...) does in the parent class SingleThreadEventLoop:
protected SingleThreadEventLoop(EventLoopGroup parent, Executor executor,
                                boolean addTaskWakesUp, Queue<Runnable> taskQueue,
                                Queue<Runnable> tailTaskQueue,
                                RejectedExecutionHandler rejectedExecutionHandler) { ... }
private final Queue<Runnable> tailTasks;
Here we see the Queue fields being assigned.
Creating the ServerSocketChannel
The initAndRegister() method in io.netty.bootstrap.AbstractBootstrap is the entry point for creating the ServerSocketChannel.
final ChannelFuture initAndRegister() {
Channel channel = null;
try {
// 1.创建ServerSocketChannel
channel = channelFactory.newChannel();
// 2.初始化ServerSocketChannel
init(channel);
} catch (Throwable t) {
}
// 3.将ServerSocketChannel注册到Selector上
ChannelFuture regFuture = config().group().register(channel);
return regFuture;
}
Server startup boils down to the three commented steps above. In order, start with the creation of the ServerSocketChannel, which uses the factory pattern plus reflection. See io.netty.channel.ReflectiveChannelFactory:
/**
* A {@link ChannelFactory} that instantiates a new {@link Channel} by invoking its default constructor reflectively.
*/
public class ReflectiveChannelFactory<T extends Channel> implements ChannelFactory<T>
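The factory-plus-reflection idea is easy to reproduce outside Netty. Here is a minimal sketch (class and method names are mine, not Netty's) that instantiates via the default constructor, exactly as ReflectiveChannelFactory does:

```java
import java.lang.reflect.Constructor;
import java.util.ArrayList;

// Hypothetical miniature of ReflectiveChannelFactory: hold a Class,
// instantiate it via its no-arg constructor on demand.
class MiniChannelFactory<T> {
    private final Constructor<? extends T> constructor;

    MiniChannelFactory(Class<? extends T> clazz) {
        try {
            this.constructor = clazz.getConstructor();
        } catch (NoSuchMethodException e) {
            throw new IllegalArgumentException(clazz + " has no default constructor", e);
        }
    }

    T newChannel() {
        try {
            return constructor.newInstance();
        } catch (Throwable t) {
            throw new IllegalStateException(
                    "Unable to create instance of " + constructor.getDeclaringClass(), t);
        }
    }
}
```

Caching the Constructor up front (rather than the Class) means the lookup cost is paid once, at bootstrap configuration time.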
Remember the line bootstrap.channel(NioServerSocketChannel.class) from earlier? The Class passed in there is what gets reflectively instantiated into a Channel. Since this is the server side, step into NioServerSocketChannel to see how it is created.
private static final SelectorProvider DEFAULT_SELECTOR_PROVIDER = SelectorProvider.provider();
private static ServerSocketChannel newSocket(SelectorProvider provider) {
try {
/**
* Use the {@link SelectorProvider} to open {@link SocketChannel} and so remove condition in
* {@link SelectorProvider#provider()} which is called by each ServerSocketChannel.open() otherwise.
*
* See #2308.
*/
return provider.openServerSocketChannel();
} catch (IOException e) {
throw new ChannelException(
"Failed to open a server socket.", e);
}
}
public NioServerSocketChannel() {
this(newSocket(DEFAULT_SELECTOR_PROVIDER));
}
public NioServerSocketChannel(ServerSocketChannel channel) {
super(null, channel, SelectionKey.OP_ACCEPT);
config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
The line provider.openServerSocketChannel() is what creates the ServerSocketChannel. Now chase further into the parent classes and see what happens there, starting from super(null, channel, SelectionKey.OP_ACCEPT);
protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
super(parent);
this.ch = ch;
this.readInterestOp = readInterestOp;
ch.configureBlocking(false);
}
this.readInterestOp = readInterestOp stores the interested operation (here SelectionKey.OP_ACCEPT, passed from above), and ch.configureBlocking(false) switches the freshly created channel to non-blocking mode. Keep chasing up the parent classes.
protected AbstractChannel(Channel parent) {
this.parent = parent;
id = newId();
unsafe = newUnsafe();
pipeline = newChannelPipeline();
}
protected DefaultChannelPipeline newChannelPipeline() {
return new DefaultChannelPipeline(this);
}
Here a ChannelPipeline is created and attached to the Channel. One more step down:
protected DefaultChannelPipeline(Channel channel) {
this.channel = ObjectUtil.checkNotNull(channel, "channel");
succeededFuture = new SucceededChannelFuture(channel, null);
voidPromise = new VoidChannelPromise(channel, true);
tail = new TailContext(this);
head = new HeadContext(this);
head.next = tail;
tail.prev = head;
}
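The head/tail wiring above is just a doubly linked list with two sentinel nodes. A plain-Java sketch of that structure (names are illustrative, not Netty's):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature of DefaultChannelPipeline's sentinel list:
// handlers are spliced in between fixed head and tail contexts.
class MiniPipeline {
    static final class Ctx {
        final String name;
        Ctx prev, next;
        Ctx(String name) { this.name = name; }
    }

    final Ctx head = new Ctx("head");
    final Ctx tail = new Ctx("tail");

    MiniPipeline() {
        head.next = tail;
        tail.prev = head;
    }

    // what addLast0 does in Netty: splice the new context in just before tail
    void addLast(String name) {
        Ctx ctx = new Ctx(name);
        Ctx prev = tail.prev;
        ctx.prev = prev;
        ctx.next = tail;
        prev.next = ctx;
        tail.prev = ctx;
    }

    // walk head -> tail and collect names, for inspection
    List<String> names() {
        List<String> out = new ArrayList<>();
        for (Ctx c = head; c != null; c = c.next) out.add(c.name);
        return out;
    }
}
```

The sentinels let insertion and removal avoid null checks at the ends, which is why a "fresh" pipeline is already head --> tail.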
At this point the ChannelPipeline looks roughly like: head --> tail
Initializing the ServerSocketChannel
Back to the important step 2 mentioned above: init(channel). Note that the implementing class is ServerBootstrap, since this is the server side.
@Override
void init(Channel channel) {
ChannelPipeline p = channel.pipeline();
final EventLoopGroup currentChildGroup = childGroup;
final ChannelHandler currentChildHandler = childHandler;
p.addLast(new ChannelInitializer<Channel>() {
        @Override
        public void initChannel(final Channel ch) {
            final ChannelPipeline pipeline = ch.pipeline();
            ChannelHandler handler = config.handler();
            if (handler != null) {
                pipeline.addLast(handler);
            }
            ch.eventLoop().execute(new Runnable() {
                @Override
                public void run() {
                    pipeline.addLast(new ServerBootstrapAcceptor(
                            ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
                }
            });
        }
    });
}
Here init adds a ChannelHandler to the ChannelPipeline, which now looks roughly like: head --> ChannelInitializer --> tail
Once the serverSocketChannel is registered with the EventLoop (that is, the Selector), this initChannel will be triggered. To avoid getting lost, we defer the exact call path until we actually reach it later.
Registering the ServerSocketChannel with the Selector
Back to the important step 3 above: config().group().register(channel). Analyzing the class hierarchy (or simply debugging) traces the call to SingleThreadEventLoop's register method.
@Override
public ChannelFuture register(Channel channel) {
return register(new DefaultChannelPromise(channel, this));
}
@Override
public ChannelFuture register(final ChannelPromise promise) {
ObjectUtil.checkNotNull(promise, "promise");
promise.channel().unsafe().register(this, promise);
return promise;
}
Following further down, what is ultimately called is AbstractChannel's register method:
@Override
public final void register(EventLoop eventLoop, final ChannelPromise promise) {
AbstractChannel.this.eventLoop = eventLoop;
eventLoop.execute(new Runnable() {
@Override
public void run() {
register0(promise);
}
});
}
Follow eventLoop.execute() downward:
private void execute(Runnable task, boolean immediate) {
addTask(task);
startThread();
}
addTask(task) puts the Runnable above into the Queue mentioned earlier, as the following code shows:
/**
* Add a task to the task queue, or throws a {@link RejectedExecutionException} if this instance was shutdown
* before.
*/
protected void addTask(Runnable task) {
ObjectUtil.checkNotNull(task, "task");
if (!offerTask(task)) {
reject(task);
}
}
final boolean offerTask(Runnable task) {
if (isShutdown()) {
reject();
}
return taskQueue.offer(task);
}
After the task is put into the taskQueue, execution reaches startThread(); take a look inside.
private void startThread() {
doStartThread();
}
private void doStartThread() {
executor.execute(new Runnable() {
@Override
public void run() {
SingleThreadEventExecutor.this.run();
}
});
}
Keep following into executor.execute (io.netty.util.concurrent.ThreadPerTaskExecutor) — only here is a new thread actually created to run SingleThreadEventExecutor.this.run(); the thread name is roughly nioEventLoopGroup-2-1. See:
@Override
public void execute(Runnable command) {
threadFactory.newThread(command).start();
}
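The addTask + startThread + run() trio amounts to one dedicated thread draining a task queue. A self-contained sketch of that pattern (my own names, not Netty's actual classes; error handling trimmed):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical miniature of SingleThreadEventExecutor: execute() offers a
// task and lazily starts the single worker thread that drains the queue.
class MiniSingleThreadExecutor {
    private final BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
    private boolean started;

    synchronized void execute(Runnable task) {
        taskQueue.offer(task);          // addTask(task)
        if (!started) {                 // startThread()
            started = true;
            Thread t = new Thread(() -> {
                while (true) {
                    try {
                        taskQueue.take().run();   // drain tasks, endlessly
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }, "miniEventLoop-1");
            t.setDaemon(true);
            t.start();
        }
    }
}
```

Tasks submitted from any thread all run, in submission order, on the one loop thread — which is exactly the thread-confinement guarantee Netty's event loop gives its channels.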
SingleThreadEventExecutor.this.run() actually executes the following (io.netty.channel.nio.NioEventLoop):
@Override
protected void run() {
int selectCnt = 0;
for (;;) {
try {
int strategy;
try {
strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
switch (strategy) {
case SelectStrategy.CONTINUE:
continue;
case SelectStrategy.BUSY_WAIT:
// fall-through to SELECT since the busy-wait is not supported with NIO
case SelectStrategy.SELECT:
long curDeadlineNanos = nextScheduledTaskDeadlineNanos();
if (curDeadlineNanos == -1L) {
curDeadlineNanos = NONE; // nothing on the calendar
}
nextWakeupNanos.set(curDeadlineNanos);
try {
if (!hasTasks()) {
strategy = select(curDeadlineNanos);
}
} finally {
// This update is just to help block unnecessary selector wakeups
// so use of lazySet is ok (no race condition)
nextWakeupNanos.lazySet(AWAKE);
}
// fall through
default:
}
} catch (IOException e) {
// If we receive an IOException here it's because the Selector is messed up. Let's rebuild
// the selector and retry. https://github.com/netty/netty/issues/8566
// ...
continue;
}
selectCnt++;
cancelledKeys = 0;
needsToSelectAgain = false;
final int ioRatio = this.ioRatio;
boolean ranTasks;
if (ioRatio == 100) {
try {
if (strategy > 0) {
processSelectedKeys();
}
} finally {
// Ensure we always run tasks.
ranTasks = runAllTasks();
}
} else if (strategy > 0) {
final long ioStartTime = System.nanoTime();
try {
processSelectedKeys();
} finally {
// Ensure we always run tasks.
final long ioTime = System.nanoTime() - ioStartTime;
ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
}
} else {
ranTasks = runAllTasks(0); // This will run the minimum number of tasks
}
} finally {
// Always handle shutdown even if the loop processing threw an exception.
}
}
}
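The ioRatio arithmetic above is worth a second look: with ioRatio = 50 the loop grants tasks exactly as much time as IO just took, and lower ratios grant proportionally more. A quick check of the formula (the constant 100 and the expression are taken from the run() code above):

```java
// taskBudget = ioTime * (100 - ioRatio) / ioRatio, as in NioEventLoop.run()
final class IoRatioMath {
    static long taskBudgetNanos(long ioTimeNanos, int ioRatio) {
        return ioTimeNanos * (100 - ioRatio) / ioRatio;
    }
}
```

So ioRatio = 80 caps task processing at a quarter of the IO time, biasing the loop toward IO work.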
A quick gloss of run() (details picked apart later): it is an endless loop that, roughly, keeps polling and running tasks while the taskQueue has any (see runAllTasks()); otherwise it calls selector.select() and, when IO events arrive, handles them via processSelectedKeys(). The description here is not perfectly precise, but that is the gist.
Speaking of which — didn't we just put a Runnable into the taskQueue? Here it is again:
new Runnable() {
@Override
public void run() {
register0(promise);
}
};
So now the register0(promise) inside that Runnable gets executed.
private void register0(ChannelPromise promise) {
//(1)把ServerSocketChannel注册到了Selector上
doRegister();
// Ensure we call handlerAdded(...) before we actually notify the promise. This is needed as the
// user may already fire events through the pipeline in the ChannelFutureListener.
//(2)触发pipeline中的ChannelHandler的handlerAdded()方法调用
pipeline.invokeHandlerAddedIfNeeded();
safeSetSuccess(promise);
//(3)触发pipeline中的ChannelInboundHandler的channelRegistered()方法调用
pipeline.fireChannelRegistered();
// Only fire a channelActive if the channel has never been registered. This prevents firing
// multiple channel actives if the channel is deregistered and re-registered.
if (isActive()) {
if (firstRegistration) {
pipeline.fireChannelActive();
} else if (config().isAutoRead()) {
// This channel was registered before and autoRead() is set. This means we need to begin read
// again so that we process inbound data.
//
// See https://github.com/netty/netty/issues/4805
beginRead();
}
}
}
doRegister()
@Override
protected void doRegister() throws Exception {
boolean selected = false;
for (;;) {
selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
return;
}
}
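Note the detail in doRegister(): the channel is registered with interest set 0 (not OP_ACCEPT) and with the Netty channel itself as the attachment; the real interest ops are raised later (beginRead() effectively does this). The same two-phase registration can be shown in plain NIO:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

// Plain-NIO illustration of Netty's two-phase registration:
// register with ops = 0 first, raise interest later.
final class TwoPhaseRegister {
    static int[] demo() throws IOException {
        try (Selector selector = Selector.open();
             ServerSocketChannel ch = ServerSocketChannel.open()) {
            ch.configureBlocking(false);
            // phase 1: register with no interest ops, attaching a context object
            SelectionKey key = ch.register(selector, 0, "attachment");
            int before = key.interestOps();
            // phase 2: raise the real interest, as a server channel's read does
            key.interestOps(SelectionKey.OP_ACCEPT);
            int after = key.interestOps();
            return new int[]{before, after};
        }
    }
}
```

Registering with 0 first lets the framework finish wiring up the pipeline before any IO event can possibly fire on the key.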
This plainly registers the ServerSocketChannel with the Selector.
(2) pipeline.invokeHandlerAddedIfNeeded()
Purpose: trigger the handlerAdded() calls of the ChannelHandlers in the pipeline.
final void invokeHandlerAddedIfNeeded() {
if (firstRegistration) {
firstRegistration = false;
// We are now registered to the EventLoop. It's time to call the callbacks for the ChannelHandlers,
// that were added before the registration was done.
callHandlerAddedForAllHandlers();
}
}
The comments above tell us clearly: the ServerSocketChannel is now registered with the EventLoop, so it is time to call back the ChannelHandlers in the Pipeline. This links up with the ServerSocketChannel initialization earlier — presumably the ChannelInitializer added there is about to be triggered.
private void callHandlerAddedForAllHandlers() {
final PendingHandlerCallback pendingHandlerCallbackHead;
synchronized (this) {
pendingHandlerCallbackHead = this.pendingHandlerCallbackHead;
// Null out so it can be GC'ed.
this.pendingHandlerCallbackHead = null;
}
// This must happen outside of the synchronized(...) block as otherwise handlerAdded(...) may be called while
// holding the lock and so produce a deadlock if handlerAdded(...) will try to add another handler from outside
// the EventLoop.
PendingHandlerCallback task = pendingHandlerCallbackHead;
while (task != null) {
task.execute();
task = task.next;
}
}
First, a word on why PendingHandlerCallback suddenly shows up. It is like this: addLast(ChannelHandler... handlers) actually ends up calling the method below.
public final ChannelPipeline addLast(EventExecutorGroup group, String name, ChannelHandler handler) {
final AbstractChannelHandlerContext newCtx;
synchronized (this) {
newCtx = newContext(group, filterName(name, handler), handler);
addLast0(newCtx);
// If the registered is false it means that the channel was not registered on an eventLoop yet.
// In this case we add the context to the pipeline and add a task that will call
// ChannelHandler.handlerAdded(...) once the channel is registered.
if (!registered) {
newCtx.setAddPending();
callHandlerCallbackLater(newCtx, true);
return this;
}
EventExecutor executor = newCtx.executor();
if (!executor.inEventLoop()) {
callHandlerAddedInEventLoop(newCtx, executor);
return this;
}
}
callHandlerAdded0(newCtx);
return this;
}
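The deferred-callback idea — remember the handler now, fire handlerAdded() once registration happens — can be isolated into a tiny sketch (my own names and simplifications, not Netty's types):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical miniature of the PendingHandlerCallback mechanism:
// handlerAdded callbacks queued before registration, flushed on register().
class MiniPipelineCallbacks {
    private boolean registered;
    private final List<String> pending = new ArrayList<>();
    final List<String> firedHandlerAdded = new ArrayList<>();

    void addLast(String handler) {
        if (!registered) {
            pending.add(handler);            // callHandlerCallbackLater(...)
        } else {
            firedHandlerAdded.add(handler);  // callHandlerAdded0(...)
        }
    }

    void register() {                        // invokeHandlerAddedIfNeeded()
        registered = true;
        for (String h : pending) firedHandlerAdded.add(h);
        pending.clear();
    }
}
```

Handlers added before registration only see their handlerAdded() after register(); handlers added afterwards see it immediately — matching the two branches in Netty's addLast above.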
The three comment lines above explain where PendingHandlerCallback comes from: when a ChannelHandler is added to the Pipeline before the Channel is registered with an EventLoop, the current AbstractChannelHandlerContext is wrapped into a PendingHandlerCallback, waiting to be invoked later.
Back to the main thread: PendingHandlerCallback.execute(), after a few twists, calls the ChannelHandler's handlerAdded(), as shown here (io.netty.channel.AbstractChannelHandlerContext):
final void callHandlerAdded() throws Exception {
// We must call setAddComplete before calling handlerAdded. Otherwise if the handlerAdded method generates
// any pipeline events ctx.handler() will miss them because the state will not allow it.
if (setAddComplete()) {
handler().handlerAdded(this);
}
}
Now look back at ChannelInitializer:
/**
* {@inheritDoc} If override this method ensure you call super!
*/
@Override
public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
if (ctx.channel().isRegistered()) {
if (initChannel(ctx)) {
removeState(ctx);
}
}
}
private boolean initChannel(ChannelHandlerContext ctx) throws Exception {
if (initMap.add(ctx)) { // Guard against re-entrance.
initChannel((C) ctx.channel());
return true;
}
return false;
}
/**
* This method will be called once the {@link Channel} was registered. After the method returns this instance
* will be removed from the {@link ChannelPipeline} of the {@link Channel}.
*
* @param ch the {@link Channel} which was registered.
* @throws Exception is thrown if an error occurs. In that case it will be handled by
* {@link #exceptionCaught(ChannelHandlerContext, Throwable)} which will by default close
* the {@link Channel}.
*/
protected abstract void initChannel(C ch) throws Exception;
So in the end initChannel gets called, which means the initChannel overridden during the ServerSocketChannel initialization above — the p.addLast(new ChannelInitializer...) — executes at this point.
After this initChannel runs (a ChannelInitializer removes itself once done), the ChannelPipeline again looks roughly like: head --> tail
Note that the ServerBootstrapAcceptor has not been put into the ChannelPipeline yet; instead, a task was placed into the Queue mentioned above, like so:
ch.eventLoop().execute(new Runnable() {
@Override
public void run() {
pipeline.addLast(new ServerBootstrapAcceptor(
ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
}
});
What ServerBootstrapAcceptor does inside, we leave for later. So far doRegister() and pipeline.invokeHandlerAddedIfNeeded() are covered; next comes pipeline.fireChannelRegistered().
(3) pipeline.fireChannelRegistered()
Purpose: trigger the channelRegistered() calls of the ChannelInboundHandlers in the pipeline.
static void invokeChannelRegistered(final AbstractChannelHandlerContext next) {
EventExecutor executor = next.executor();
if (executor.inEventLoop()) {
next.invokeChannelRegistered();
} else {
executor.execute(new Runnable() {
@Override
public void run() {
next.invokeChannelRegistered();
}
});
}
}
private void invokeChannelRegistered() {
if (invokeHandler()) {
try {
// 这里触发了channelRegistered()方法调用
((ChannelInboundHandler) handler()).channelRegistered(this);
} catch (Throwable t) {
invokeExceptionCaught(t);
}
} else {
fireChannelRegistered();
}
}
At this point the register0() task is done. But remember that while it ran, another Runnable was added to the taskQueue:
new Runnable() {
@Override
public void run() {
pipeline.addLast(new ServerBootstrapAcceptor(
ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
}
}
The newly added task now gets polled, as the following code shows (io.netty.util.concurrent.SingleThreadEventExecutor):
protected boolean runAllTasks(long timeoutNanos) {
for (;;) {
safeExecute(task);
task = pollTask();
if (task == null) {
lastExecutionTime = ScheduledFutureTask.nanoTime();
break;
}
}
afterRunningAllTasks();
this.lastExecutionTime = lastExecutionTime;
return true;
}
After this Runnable executes, the ChannelPipeline looks roughly like: head --> ServerBootstrapAcceptor --> tail
With all tasks in the taskQueue done, the EventLoop thread goes back to selector.select(), waiting for client connections. At this point, the server has started successfully.
Client-side startup
Creating the Selector
Identical to the server side.
Creating the SocketChannel
The entry is the same as on the server side; the difference is that the client calls bootstrap.channel(NioSocketChannel.class), so look at the NioSocketChannel implementation instead. Nothing more to say here.
Initializing the SocketChannel
The client side is much simpler, as follows (io.netty.bootstrap.Bootstrap):
@Override
void init(Channel channel) {
ChannelPipeline p = channel.pipeline();
p.addLast(config.handler());
}
Registering the SocketChannel with the Selector
Registration runs through doRegister() just as on the server side; executing pipeline.invokeHandlerAddedIfNeeded() is less involved than on the server (the server's channel initialization added an extra task that appends ServerBootstrapAcceptor to the ChannelPipeline). As analyzed earlier, this triggers the initChannel call, so the user-written ChannelInitializer runs now — that is, ch.pipeline().addLast(new NettyClientHandler()) executes, inserting the user-written NettyClientHandler into the ChannelPipeline.
Connecting to the server
private ChannelFuture doResolveAndConnect(final SocketAddress remoteAddress, final SocketAddress localAddress) {
final ChannelFuture regFuture = initAndRegister();
final Channel channel = regFuture.channel();
if (regFuture.isDone()) {
return doResolveAndConnect0(channel, remoteAddress, localAddress, channel.newPromise());
} else {
// Registration future is almost always fulfilled already, but just in case it‘s not.
final PendingRegistrationPromise promise = new PendingRegistrationPromise(channel);
regFuture.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
// Directly obtain the cause and do a null check so we only need one volatile read in case of a
// failure.
Throwable cause = future.cause();
if (cause != null) {
// Registration on the EventLoop failed so fail the ChannelPromise directly to not cause an
// IllegalStateException once we try to access the EventLoop of the Channel.
promise.setFailure(cause);
} else {
// Registration was successful, so set the correct executor to use.
// See https://github.com/netty/netty/issues/2586
promise.registered();
doResolveAndConnect0(channel, remoteAddress, localAddress, promise);
}
}
});
return promise;
}
}
After registration succeeds, the callback that connects to the server runs: doResolveAndConnect0(), which in turn calls doConnect().
private static void doConnect(
final SocketAddress remoteAddress, final SocketAddress localAddress, final ChannelPromise connectPromise) {
// This method is invoked before channelRegistered() is triggered. Give user handlers a chance to set up
// the pipeline in its channelRegistered() implementation.
final Channel channel = connectPromise.channel();
channel.eventLoop().execute(new Runnable() {
@Override
public void run() {
if (localAddress == null) {
channel.connect(remoteAddress, connectPromise);
} else {
channel.connect(remoteAddress, localAddress, connectPromise);
}
connectPromise.addListener(ChannelFutureListener.CLOSE_ON_FAILURE);
}
});
}
What is ultimately called is io.netty.channel.socket.nio.NioSocketChannel#doConnect():
@Override
protected boolean doConnect(SocketAddress remoteAddress, SocketAddress localAddress) throws Exception {
boolean success = false;
try {
boolean connected = SocketUtils.connect(javaChannel(), remoteAddress);
if (!connected) {
selectionKey().interestOps(SelectionKey.OP_CONNECT);
}
success = true;
return connected;
} finally {
if (!success) {
doClose();
}
}
}
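The false return from connect() is the normal non-blocking case: the connection completes later and must be finished via finishConnect(). A self-contained loopback demonstration of the same handshake in plain NIO (the server socket, addresses, and timeout are local to this demo):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Plain-NIO version of the doConnect()/finishConnect() handshake Netty performs.
final class NonBlockingConnectDemo {
    static boolean demo() throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open();
             SocketChannel client = SocketChannel.open();
             Selector selector = Selector.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            client.configureBlocking(false);
            boolean connected = client.connect(server.getLocalAddress());
            if (!connected) {
                // same move as Netty: express interest in OP_CONNECT and wait
                client.register(selector, SelectionKey.OP_CONNECT);
                selector.select(2000);
                connected = client.finishConnect();
            }
            return connected;
        }
    }
}
```

On loopback the connect may even complete immediately; either way the caller ends up with an established channel, which is why Netty's doConnect() returns the boolean instead of assuming one path.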
Now look at SocketUtils.connect(javaChannel(), remoteAddress) in io.netty.util.internal.SocketUtils:
public static boolean connect(final SocketChannel socketChannel, final SocketAddress remoteAddress)
throws IOException {
try {
return AccessController.doPrivileged(new PrivilegedExceptionAction<Boolean>() {
            @Override
            public Boolean run() throws IOException {
                return socketChannel.connect(remoteAddress);
            }
        }).booleanValue();
    } catch (PrivilegedActionException e) {
        throw (IOException) e.getCause();
    }
}
Here we meet the familiar socketChannel.connect(remoteAddress).
Server and client communication
The server now senses the IO event: in io.netty.channel.nio.NioEventLoop's run(), processSelectedKeys() is called, and each IO event is ultimately handled by processSelectedKey():
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
try {
int readyOps = k.readyOps();
// We first need to call finishConnect() before try to trigger a read(...) or write(...) as otherwise
// the NIO JDK channel implementation may throw a NotYetConnectedException.
if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
// remove OP_CONNECT as otherwise Selector.select(..) will always return without blocking
// See https://github.com/netty/netty/issues/924
int ops = k.interestOps();
ops &= ~SelectionKey.OP_CONNECT;
k.interestOps(ops);
unsafe.finishConnect();
}
// Process OP_WRITE first as we may be able to write some queued buffers and so free memory.
if ((readyOps & SelectionKey.OP_WRITE) != 0) {
// Call forceFlush which will also take care of clear the OP_WRITE once there is nothing left to write
ch.unsafe().forceFlush();
}
// Also check for readOps of 0 to workaround possible JDK bug which may otherwise lead
// to a spin loop
if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read();
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}