@@ -476,6 +476,15 @@ public class CommonParameter {
@Getter
@Setter
public int jsonRpcMaxBlockFilterNum = 50000;
@Getter
@Setter
public int jsonRpcMaxBatchSize = 100;
Collaborator

[SHOULD] Validate non-negative range for the new size-limit fields at config load

The three new fields (jsonRpcMaxBatchSize, jsonRpcMaxResponseSize, jsonRpcMaxAddressSize) are read via Args.applyNodeConfig with no range validation. The > 0 guards at the call sites mean a negative value silently becomes a permanent 'no limit' state — that is fine if <= 0 is the documented contract, but neither reference.conf nor config.conf says so explicitly; they only say '> 0 otherwise no limit'. Operators reading the comment may assume only 0 disables the limit; setting -1 (a common 'unset' sentinel) silently has the same effect, while Integer.MIN_VALUE is also accepted with no warning.

Suggestion: validate value >= 0 in Args.applyNodeConfig (reject startup with a clear error on negative values), and update the reference/config comments to spell out the exact 'disabled' semantics — e.g. # 0 disables the limit; negative values are rejected at startup.
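For illustration, the suggested guard could look roughly like the following. This is a minimal sketch, not the PR's code: the helper name requireNonNegative, the config-key string, and the choice of IllegalArgumentException for rejecting startup are all assumptions.

```java
// Hypothetical validation helper in the spirit of the suggestion for
// Args.applyNodeConfig; names and exception type are assumptions.
public class ConfigValidation {

  public static int requireNonNegative(String name, int value) {
    if (value < 0) {
      // Reject startup with a clear error instead of silently disabling the limit.
      throw new IllegalArgumentException(
          name + " must be >= 0 (0 disables the limit), got: " + value);
    }
    return value;
  }

  public static void main(String[] args) {
    // A valid value passes through unchanged.
    System.out.println(requireNonNegative("node.jsonrpc.maxBatchSize", 100));
    // A negative sentinel is rejected with an explanatory message.
    try {
      requireNonNegative("node.jsonrpc.maxBatchSize", -1);
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

With this shape, each assignment in applyNodeConfig would wrap the getter call, e.g. `PARAMETER.jsonRpcMaxBatchSize = requireNonNegative("maxBatchSize", jsonrpc.getMaxBatchSize());`.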

Collaborator Author

The comment specifies it already, but I can improve it to '<= 0 means no limit' in config.conf.

@Getter
@Setter
public int jsonRpcMaxResponseSize = 25 * 1024 * 1024;
@Getter
@Setter
public int jsonRpcMaxAddressSize = 1000;

@Getter
@Setter
@@ -303,6 +303,9 @@ public void setHttpPBFTPort(int v) {
private int maxBlockRange = 5000;
private int maxSubTopics = 1000;
private int maxBlockFilterNum = 50000;
private int maxBatchSize = 100;
private int maxResponseSize = 25 * 1024 * 1024;
Collaborator

[SHOULD] Use a memory-size config type for maxResponseSize

private int maxResponseSize = 25 * 1024 * 1024 is a byte-quantity field, but it is read as a raw int so the config file has to spell out 26214400 instead of a human-readable 25M / 25MiB. The project's config conventions call for getMemorySize() for size-class settings — keeping int here makes the value error-prone for operators (the inline comment // 25 MB = 25 * 1024 * 1024 B in config.conf is an early symptom). maxBatchSize and maxAddressSize are count-class and int is fine for them.

Suggestion: change maxResponseSize to a String field and parse it with getMemorySize(), so HOCON values like 25M work; keep the count-class fields as int.
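To illustrate what the power-of-two notation buys the operator, here is a sketch of the parsing semantics behind a value like 25M. This is a hypothetical hand-rolled helper for demonstration only — the actual change would call Typesafe Config's getMemorySize() rather than parse suffixes itself, and the MemSize/parseBytes names are invented.

```java
// Hypothetical illustration of HOCON-style power-of-two size suffixes;
// the real fix would delegate to Config.getMemorySize() instead.
public class MemSize {

  public static long parseBytes(String v) {
    v = v.trim();
    long mult = 1L;
    if (v.endsWith("K")) {
      mult = 1024L;
      v = v.substring(0, v.length() - 1);
    } else if (v.endsWith("M")) {
      mult = 1024L * 1024L;
      v = v.substring(0, v.length() - 1);
    } else if (v.endsWith("G")) {
      mult = 1024L * 1024L * 1024L;
      v = v.substring(0, v.length() - 1);
    }
    // A bare number is taken as raw bytes, matching the current int field.
    return Long.parseLong(v.trim()) * mult;
  }

  public static void main(String[] args) {
    // "25M" and the spelled-out byte count denote the same limit.
    System.out.println(parseBytes("25M"));      // 26214400
    System.out.println(parseBytes("26214400")); // 26214400
  }
}
```

The point of the suggestion is that `maxResponseSize = 25M` in the config file is self-documenting, while `26214400` needs the inline comment to be readable.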

Collaborator Author

Using getMemorySize() increases the cognitive burden for users; using explicit integer values better conveys the intended meaning.

private int maxAddressSize = 1000;
}

@Getter
9 changes: 9 additions & 0 deletions common/src/main/resources/reference.conf
@@ -402,6 +402,15 @@ node {

# Maximum number for blockFilter
maxBlockFilterNum = 50000

# Maximum number of requests in a JSON-RPC batch, >0 otherwise no limit
maxBatchSize = 100

# Maximum response body size in bytes for JSON-RPC (default 25MB), >0 otherwise no limit
maxResponseSize = 26214400

# Maximum number of addresses in a single JSON-RPC request, >0 otherwise no limit
maxAddressSize = 1000
}

# Disabled API list (works for http, rpc and pbft, not jsonrpc). Case insensitive.
1 change: 1 addition & 0 deletions framework/build.gradle
@@ -58,6 +58,7 @@ dependencies {
}

testImplementation group: 'org.springframework', name: 'spring-test', version: "${springVersion}"
testImplementation group: 'javax.portlet', name: 'portlet-api', version: '3.0.1'
implementation group: 'org.zeromq', name: 'jeromq', version: '0.5.3'
api project(":chainbase")
api project(":protocol")
3 changes: 3 additions & 0 deletions framework/src/main/java/org/tron/core/config/args/Args.java
@@ -585,6 +585,9 @@ private static void applyNodeConfig(NodeConfig nc) {
PARAMETER.jsonRpcMaxBlockRange = jsonrpc.getMaxBlockRange();
PARAMETER.jsonRpcMaxSubTopics = jsonrpc.getMaxSubTopics();
PARAMETER.jsonRpcMaxBlockFilterNum = jsonrpc.getMaxBlockFilterNum();
PARAMETER.jsonRpcMaxBatchSize = jsonrpc.getMaxBatchSize();
PARAMETER.jsonRpcMaxResponseSize = jsonrpc.getMaxResponseSize();
PARAMETER.jsonRpcMaxAddressSize = jsonrpc.getMaxAddressSize();

// ---- P2P sub-bean ----
PARAMETER.nodeP2pVersion = nc.getP2p().getVersion();
@@ -0,0 +1,168 @@
package org.tron.core.services.filter;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;
import lombok.Getter;

/**
* Buffers the response body without writing to the underlying response,
* so the caller can replay it after the handler returns.
*
* <p>If {@code maxBytes > 0} and the response would exceed that limit, the
* {@link #isOverflow()} flag is set instead of throwing. The caller should check this flag after
* the handler returns and write its own error response when true.
*
* <p>Header-mutating methods ({@code setStatus}, {@code setContentType}) are buffered here and
* only forwarded to the real response via {@link #commitToResponse()}.
*/
public class BufferedResponseWrapper extends HttpServletResponseWrapper {

private final HttpServletResponse actual;
private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
private final int maxBytes;
private int status = HttpServletResponse.SC_OK;
private String contentType;
private boolean committed = false;
@Getter
private volatile boolean overflow = false;

private final ServletOutputStream outputStream = new ServletOutputStream() {
@Override
public void write(int b) {
if (overflow) {
return;
}
if (maxBytes > 0 && buffer.size() >= maxBytes) {
markOverflow();
return;
}
buffer.write(b);
}

@Override
public void write(byte[] b, int off, int len) {
if (overflow) {
return;
}
if (maxBytes > 0 && buffer.size() + len > maxBytes) {
markOverflow();
return;
}
buffer.write(b, off, len);
}

@Override
public boolean isReady() {
return true;
}

@Override
public void setWriteListener(WriteListener writeListener) {
}
};

private final PrintWriter writer = new PrintWriter(outputStream, true);
Collaborator

[NIT] BufferedResponseWrapper PrintWriter uses platform default charset

Suggestion:

private final PrintWriter writer =
    new PrintWriter(new OutputStreamWriter(outputStream, StandardCharsets.UTF_8), true);


/**
* @param response the wrapped response
* @param maxBytes max allowed response bytes; {@code 0} means no limit
*/
public BufferedResponseWrapper(HttpServletResponse response, int maxBytes) {
super(response);
this.actual = response;
this.maxBytes = maxBytes;
}

private void markOverflow() {
overflow = true;
buffer.reset();
}

/**
* Early-detection path: if the framework reports the full content length before writing any
* bytes, we can flag overflow without buffering anything.
*/
@Override
public void setContentLength(int len) {
if (maxBytes > 0 && len > maxBytes) {
markOverflow();
}
}

@Override
public void setContentLengthLong(long len) {
if (maxBytes > 0 && len > maxBytes) {
markOverflow();
}
}

@Override
public int getStatus() {
return this.status;
}

@Override
public void setStatus(int sc) {
Collaborator

[SHOULD] Override getStatus and intercept setHeader/addHeader for Content-Length

Header capture currently only covers setStatus, setContentType, setContentLength(int|long). Two gaps:

  1. getStatus() is not overridden. Inherited HttpServletResponseWrapper.getStatus() returns the underlying response's status (still SC_OK until commitToResponse runs). Any logging filter / metrics interceptor that reads status via the wrapper before commit will see a stale value.

  2. setHeader(name, value) / addHeader(name, value) pass through to the underlying response. jsonrpc4j currently uses setContentLength so this is latent — but any downstream filter or library upgrade that writes Content-Length via setHeader would commit a Content-Length to the actual response before the size check runs.

Suggestion: override getStatus() to return this.status; intercept setHeader / addHeader for Content-Length (case-insensitive) so they go through the same buffering / overflow check as setContentLength.

Collaborator Author

Thanks for your review:

  • getStatus() is overridden;
  • Added an additional Content-Length check for setHeader and addHeader. Really, it will be overridden by actual.setContentLength(buffer.size()); so there is little necessity.

this.status = sc;
}

@Override
public void setHeader(String name, String value) {
if ("content-length".equalsIgnoreCase(name)) {
try {
setContentLengthLong(Long.parseLong(value));
} catch (NumberFormatException ignored) {
// malformed value, skip overflow check
}
} else {
super.setHeader(name, value);
}
}

@Override
public void addHeader(String name, String value) {
if ("content-length".equalsIgnoreCase(name)) {
try {
setContentLengthLong(Long.parseLong(value));
} catch (NumberFormatException ignored) {
// malformed value, skip overflow check
}
} else {
super.addHeader(name, value);
}
}

@Override
public void setContentType(String type) {
this.contentType = type;
}

@Override
public ServletOutputStream getOutputStream() {
return outputStream;
}

@Override
public PrintWriter getWriter() {
return writer;
}

public void commitToResponse() throws IOException {
Collaborator

[NIT] Make commitToResponse idempotent or fail-fast on second call

After commitToResponse(), the wrapper still holds the buffered bytes; calling it a second time would write the same body twice. The current call site only commits once so there's no live bug, but the contract is implicit and a future refactor could trip on it.

Suggestion: either clear the buffer at the end of commitToResponse, or set a committed flag and throw IllegalStateException on a second call.

Collaborator Author

Added a boolean committed variable to indicate whether the response has been written, though writing twice will never happen in jsonrpc.

if (committed) {
throw new IllegalStateException("commitToResponse() already called");
}
committed = true;
if (contentType != null) {
actual.setContentType(contentType);
}
actual.setStatus(status);
actual.setContentLength(buffer.size());
buffer.writeTo(actual.getOutputStream());
actual.getOutputStream().flush();
}
}
@@ -0,0 +1,72 @@
package org.tron.core.services.filter;

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

/**
* Wraps a request and replays a pre-read body from a byte array.
*/
public class CachedBodyRequestWrapper extends HttpServletRequestWrapper {

private enum BodyAccessor { NONE, STREAM, READER }

private final byte[] body;
private BodyAccessor accessor = BodyAccessor.NONE;

public CachedBodyRequestWrapper(HttpServletRequest request, byte[] body) {
super(request);
this.body = body;
}

@Override
public ServletInputStream getInputStream() {
Collaborator

[SHOULD] getInputStream() and getReader() should be mutually exclusive per servlet spec

Servlet 3.1 spec (§ 5.4 / § 5.5) requires that once one of getInputStream() / getReader() has been called on a request, the other must throw IllegalStateException. This wrapper returns a fresh stream/reader from the cached byte array on every call and allows arbitrary interleaving. jsonrpc4j only calls one today, so the divergence is latent — but any future filter that reads the body through the other accessor would silently double-read with no error, which is exactly the kind of bug the spec wants to prevent.

Suggestion: track which accessor was used first (boolean field) and throw IllegalStateException on the second.

Collaborator Author

At present, jsonrpc4j only invokes one of them; this is a potential issue rather than an existing bug. Adding the relevant checks is arguably redundant, but I will try it.

if (accessor == BodyAccessor.READER) {
throw new IllegalStateException("getReader() has already been called on this request");
}
accessor = BodyAccessor.STREAM;
final ByteArrayInputStream bais = new ByteArrayInputStream(body);
return new ServletInputStream() {
@Override
public int read() {
return bais.read();
}

@Override
public int read(byte[] b, int off, int len) {
return bais.read(b, off, len);
}

@Override
public boolean isFinished() {
return bais.available() == 0;
}

@Override
public boolean isReady() {
return true;
}

@Override
public void setReadListener(ReadListener readListener) {
}
};
}

@Override
public BufferedReader getReader() {
if (accessor == BodyAccessor.STREAM) {
throw new IllegalStateException("getInputStream() has already been called on this request");
}
accessor = BodyAccessor.READER;
String encoding = getCharacterEncoding();
Charset charset = encoding != null ? Charset.forName(encoding) : StandardCharsets.UTF_8;
Collaborator

[NIT] CachedBodyRequestWrapper.getReader does not handle malformed charset
Charset.forName raises an unchecked IllegalCharsetNameException / UnsupportedCharsetException for malformed values. A client sending Content-Type: application/json; charset=foo! will crash inside getReader() instead of letting the upstream JSON parser produce a clean -32700. getCharacterEncoding() returns whatever the client put on the wire — untrusted input.

Suggestion:

Charset charset;
try {
  charset = encoding != null ? Charset.forName(encoding) : StandardCharsets.UTF_8;
} catch (IllegalCharsetNameException | UnsupportedCharsetException ex) {
  charset = StandardCharsets.UTF_8;
}

return new BufferedReader(new InputStreamReader(new ByteArrayInputStream(body), charset));
}
}