Proxy: Implement tun raw network interface inbound support for Linux #5464
base: main
Conversation
Since there's already a documentation site, why write a separate README.md?
The current README explains how the feature is intended to be used.
It's quite long, and parts of it seem aimed not at developers but at end users, like documentation?
Yes, the included README is more of an explanation for users on how to utilise the feature.
Can you explain what ConnectionHandler does? I don't seem to see any logic related to it.
"ConnectionHandler" is an interface with a single function, HandleConnection. gvisor calls the stack function every time there is a new connection (a TCP SYN packet, or a UDP packet that doesn't belong to any existing stream); the stack function maps the gvisor connection to a plain Go net.Conn and passes it to HandleConnection. HandleConnection receives the net.Conn and the destination as input, and simply passes them to the app dispatcher as a new connection with that destination. At this stage the net.Conn stream is no different from a stream any other proxy implementation receives as the result of a client connecting to the proxy. gvisor is the piece that bridges network packets and net.Conn streams; the usage of this function is in stack_gvisor.go#89
What I mean is: there's no other abstraction logic, so why declare an interface at all?
In case there could be other implementations of the stack than gvisor.
Will tun.LinuxTun actually have Close() called on it in the end?
That's a legit thought. Thank you for that.
I reviewed the flow, and it matches the other input implementations: there is no "close" signal telling a proxy input/output handler to finish, so once it is initialised it is never shut down. There is no closing of the tun device in the wireguard implementation, and no socket cleanup in the other implementations either.
This is exactly the TUN that Xray-core needs. On Linux, TPROXY actually performs better than TUN, so it would be even better if Windows TUN could be implemented as well. As for why I've been slow to start on Windows TUN myself: the most promising approach is eBPF on Windows, but it's still in beta and requires paying for identity-verified signing. So I've decided to do both, starting with wintun.dll. @Owersun
Thank you for the kind words. I omitted a Windows implementation for the same reason you mentioned. It's complicated, it would require an external wintun.dll, and with all that complexity it would barely be used. tun really shines for forwarded traffic, which most of the time means a router setup, 99% of which are Linux boxes. A Windows tun implementation would be used by maybe 1% of enthusiasts while adding 80% of the code to implement. A really bad trade-off. That said, I made the code extendable enough that it can be added later if people actually ask for it. I just don't think the initial version must have it.
Windows TUN is the difference between having it and not having it at all, while Linux TUN vs. Linux tproxy is a matter of performance and configuration hassle (
Sure, I'll have a look at how other apps do that on Windows.
Not really; for example, sing-box also has a tun interface, and it's actively used on Windows in many GUI proxy clients. Having a cross-platform tun interface would be very nice for Xray, because there would be no need for the double setup of xray <--> sing-box or tun2proxy for system-wide tunneling on Windows.
Sure thing. As I said, this is the initial implementation I made as an MVP (minimal viable product). I tried to keep it as clean as possible, so that the idea is clear, and extendable at the same time.
Thanks @Owersun for your great work! Currently, we need to spin off a separate tun2socks process like the following: https://github.com/2dust/v2rayNG/blob/master/V2rayNG/app/src/main/java/com/v2ray/ang/service/Tun2SocksService.kt#L35
To make it work on Android requires literally a one-line change.
Guys, I have some good news. I tried passing the Android VPN service fd to the core, and it works in initial testing on v2rayNG! This is an important step towards one-core-fits-all, and it will simplify many use cases. My suggestion is that we accept the PR now and work on Windows and other improvements later. @RPRX @Fangliding
Can its efficiency beat hev's? (
I'm glad that it turned out to be easily usable; that was the whole idea. As for Windows, I'm honestly done with the implementation, but it just looks horrible... It requires an external wintun.dll (all the other apps, wireguard-go/sing-tun/etc., do it the same way), has problems with IPv6, and does a lot of internal memory allocation (although this doesn't affect speed much).
Efficiency is one thing; an all-Go stack is another. That's why v2rayNG made tun optional (there are two options now, and I'm planning to add a third).
The Android build was added later; the other Android clients probably use the Linux arm64 build. I'm not sure whether that could cause problems.
If this can be implemented, v2rayNG will remove badvpn-tun2socks and keep only xray tun and hev tun.
Write Windows backslashes in JSON and you're in for a treat
Yeah, about that... PS: it will take me a day or two to understand what needs to be added to support ProcessName for routing. I already understand the idea; I just need to work out how to tie it to TUN.
@Fangliding please test whether the single quotes supported by JSON5 treat backslashes specially. In any case, matching absolute paths is useful and needs to be implemented.
@Owersun FullCone is just #237; the rest of Xray-core already supports it. You can refer to the UDP worker and Tunnel inbound code. As for testing, NatTypeTester will do.
No, you still need to write them doubled.
My head's a bit muddled; anyway, there may be minor issues. The main problem is that Linux executable names are far too unrestricted. We could mandate in the docs that forward-slash paths like C:/system/xxx must be used; then there's no problem.
What I mean is: if a slash is detected, match it as an absolute path; otherwise it's a file name. Once regex is supported, we can also borrow syntax from the other parts of routing.
I can't find a better piece of sugar than slash + name. Linux only forbids executable names from using slashes or
I looked it up
There's no need to support relative paths, so there's no conflict there. Oh, and there's also the directory-matching question: do you think treating a trailing slash as a directory is more convenient, or regex?
In any case, supporting directories is more useful than supporting regex. Let's start by implementing absolute paths and directories: anything ending with a slash is treated as a directory.
Then processName would need to be renamed to process.
Also, for TUN there should be a mechanism to pass IP packets straight through without going via an Xray outbound (and for UDP there's no need to look it up every time either, just like routing), so that non-TCP/UDP/ICMP packets are unaffected. I suspect the implementation next door works this way. Let's leave it for a later version.
I tried local processName-based routing with TUN, and it works as intended. So at least with local processes the Linux side works as intended (which I suspected it would; you said there was no source information, but actually there is: the TUN connection passes proper source info to the dispatcher).
I think this needs to be managed at the OS level. Tun is an OS network-level interface; if anyone wants traffic to bypass it (flowing through and out some other network interface), they can configure that at the network level so it never enters the tun device.
@Owersun Routing non-target traffic through the Xray direct outbound has two problems: first, the overhead of unpacking and re-encapsulating IP packets plus the inbound->outbound hop; second, the direct outbound currently supports only TCP and UDP. What I mean is that a raw socket could perhaps be used to send the IP packets out unchanged, but that's not urgent. The only thing left before release is FullCone: implement "route by source 2-tuple with a five-minute inactivity timeout" and "write the return packets' source address into the UDP responses".
On that point, I meant we could later build an advanced feature for this: two Xray instances communicating to pass along process names and other information.
Yeah, I had exactly the same idea yesterday: for this to work on ingress and egress, two Xray peers should be able to exchange contexts; the client should pass the context to the server. But that sounds like a whole feature in itself.
I will have a look over the next few days at what's needed for what you mentioned.
By the way, wouldn't traffic from Xray outbounds go back into the TUN inbound? How is that handled currently?
Apparently no one has noticed the feature that lets an outbound be manually bound to an interface.
I did merge it into my /tun branch (not rebased). Do you want me to do the rebase, or is the current state good enough?
Gvisor replicates the IP network stack in both directions: it converts raw packets into a data stream (which is then passed into the core for routing), and chops the returning data back into packets that are then sent out of the tun device. It's meant to work like this; otherwise it wouldn't work.
Ok, I've investigated all the options around NAT and what you call FullCone (which I used to call One-To-One back in my networking days). The idea of this tun implementation was to replace the workarounds people already use with Xray, apps like tun2socks and sing-tun, and to pass network traffic directly into Xray by a means other than TPROXY. Not to "surpass" TPROXY, just to offer the alternative people already use via external apps, with a similar result, just a little more native, robust and performant. The implementation does exactly that: it serves as an inbound on the client side and eliminates several steps that used to happen when an external app was involved. No app in front, no app-to-app networking, no input that doesn't know the source of the traffic. None of that; just pure, direct traffic->connection into Xray routing. Efficient, performant, informative. This implementation does that very well.
This is an implementation of a tun L3 network interface as an input to the app.
There is a README.md in the folder explaining how the feature works.
Worth mentioning about the implementation itself:
This is an extremely simplified implementation (not in functionality; rather, it is intentionally free of excessive complexity).
There is Linux support only (but the implementation allows adding support for other OSes later, if needed). The most probable use case for this feature is router boxes, which are mostly Linux-based devices.
There are no in-app configuration options for managing the interface, as a network interface is an OS-level entity. The complications of double routing tables, ip rules, and the other enhancements needed for this to work properly should be managed by the OS, to ensure integrity with the network state of the system. This is an explicit decision, based on how many different things you can do with a network interface on Linux: making all of that configurable through the app would be excessive.
No additional external libraries are used; the whole IP stack is the gvisor library, which already exists in the app. The tun interface itself is just a file in the system.
OS-level optimisations like GRO/GSO are intentionally disabled, as pass-through traffic (forwarded through the interface) is not subject to them anyway. Implementing and constantly checking and accounting for possible GRO/GSO tables hurts performance rather than helping it in a router-device configuration. There is a very slim possible advantage for traffic originating from the router itself, which might gain around 0.1% real-life performance but would need around 80% more code to support.
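As an illustration of the OS-level setup the point above refers to, a typical Linux policy-routing arrangement might look like the following (device name, addresses, table number, and fwmark are all illustrative assumptions, not values from this PR):

```shell
# All names and numbers here are hypothetical examples.
ip tuntap add dev xtun0 mode tun         # create the tun device (an L3 interface backed by a file)
ip addr add 198.18.0.1/30 dev xtun0      # assign it an address
ip link set dev xtun0 up
ip route add default dev xtun0 table 100 # dedicated routing table sending traffic into the tun
ip rule add not fwmark 0x1 table 100     # unmarked traffic enters the tun; the proxy's own
                                         # sockets are marked so their traffic avoids a loop
```

The key point, matching the paragraph above: the app does not manage any of this; the operator decides at the OS level which traffic enters the device.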
Several tests were done with different scenarios, all of them using VRAY-XTLS-Reality as the uplink.
Normal browsing works just fine; TLS sniffing also works, with no issues.
SSH through the interface also worked without any issues: no delays, no lag.
Torrents work just fine.
In one case, a test subject managed to run another IPSec (UDP) based VPN on top of this, connecting through Xray and then through a commercial IPSec VPN to different locations, and then used VoIP and video-conferencing apps on top of that, joining several meetings. All that from a country where IPSec VPNs are under a restrictive ban.
I honestly couldn't come up with more cases I wanted to try after that worked.
With my router based on the MediaTek MT7986A (Banana Pi R3), I was not able to hit a traffic ceiling on my 100 Mb uplink connection; services like speedtest always load it up to the top.
Although I expect the numbers will not be so extreme when many connections open and close: the CPU profile shows that the CPU spikes on connection establishment (routing through the app, forwarding to the uplink and so on), and then has no problem with traffic flowing through an already-established connection.
All in all, this is a very similar implementation to any standalone tun-socks proxy out there, just without the excessive complexity and without the mandatory app-to-app connection in between: packets are converted to connection streams and passed from the network directly to the app core.