bugs + agent-focused Forgejo gaps + CI + docs

Bugs:
* Shell injection in `fj auth setup-git`: the hostname is now validated
  against a strict DNS pattern and `git config` is invoked directly
  (no `sh -c`). Added 4 unit tests covering shell metacharacters.
* Pager won't compile on Windows: the libc-based `dup2` redirect now
  lives behind `#[cfg(unix)]`; non-Unix targets get a no-op stub.
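  The hostname check can be sketched roughly like this (helper name and
  exact rules are hypothetical; the shipped validation may differ):

```rust
// Sketch: reject anything that isn't a plain DNS name before it is
// ever interpolated near a shell. Hypothetical helper, illustrative only.
fn is_valid_hostname(host: &str) -> bool {
    !host.is_empty()
        && host.len() <= 253
        && host.split('.').all(|label| {
            !label.is_empty()
                && label.len() <= 63
                && !label.starts_with('-')
                && !label.ends_with('-')
                && label.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
        })
}

fn main() {
    assert!(is_valid_hostname("codeberg.org"));
    // shell metacharacters and command substitution are rejected
    assert!(!is_valid_hostname("evil.com;rm -rf /"));
    assert!(!is_valid_hostname("$(whoami).example.com"));
}
```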

Agent-focused Forgejo API gaps:
* `fj issue edit-comment ID` / `delete-comment ID`. Fix a wrong comment
  after the fact (an agent's bread-and-butter).
* `fj search code "..." [-R owner/name]`. The most-requested missing
  search dimension for codebase exploration.
* `fj pr request-review N user1 user2`, `unrequest-review N user`.
  Distinct from `pr review` (your own approval/changes/comment).
* `fj repo watch / unwatch / star / unstar / starred`. Mark repos for
  monitoring.
* `fj milestone {list,view,create,edit,close,reopen,delete,assign}`
  with `assign N --milestone ID|none` to attach an issue/PR.
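  Together these compose into a typical agent workflow (the ids, users,
  and repo names below are illustrative):

```sh
# Fix a comment left on an issue (comment id from `fj issue view 12 --comments`)
fj issue edit-comment 841 --body "Corrected: the flag is --locked."

# Find where the retry loop lives, scoped to one repo
fj search code "Retry-After" -R stephen/fj

# Ask two reviewers for PR 7, then drop one
fj pr request-review 7 alice bob
fj pr unrequest-review 7 bob

# Attach issue 12 to milestone 3, then detach it
fj milestone assign 12 --milestone 3
fj milestone assign 12 --milestone none
```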

UX + stability:
* Global `--json-fields foo,bar` projection on top of any `--json`
  output, gh-style. Dotted-path support (`--json-fields owner.login`).
* 429 / Retry-After honored in the retry loop with a 30 s cap.
* Clap `suggestions` feature for typo'd subcommands.
* `fj auth token` and `auth status --show-token` refuse to write to a
  TTY by default (`--force` to override).
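  The dotted-path projection can be sketched like this (a toy value type
  stands in for `serde_json::Value`, and the helper names are
  hypothetical; the real flag operates on the command's JSON output):

```rust
use std::collections::BTreeMap;

// Toy JSON value, enough to demonstrate dotted-path projection without
// pulling in serde_json. Illustrative only.
#[derive(Debug, Clone, PartialEq)]
enum Value {
    Str(String),
    Num(i64),
    Obj(BTreeMap<String, Value>),
}

// Walk a dotted path like "owner.login" through nested objects.
fn get_path<'a>(v: &'a Value, path: &str) -> Option<&'a Value> {
    path.split('.').try_fold(v, |cur, key| match cur {
        Value::Obj(m) => m.get(key),
        _ => None,
    })
}

// Keep only the requested fields, keyed by their full dotted path
// (one of several reasonable output shapes).
fn project(v: &Value, fields: &[&str]) -> BTreeMap<String, Value> {
    fields
        .iter()
        .filter_map(|f| get_path(v, f).map(|hit| (f.to_string(), hit.clone())))
        .collect()
}

fn main() {
    let mut owner = BTreeMap::new();
    owner.insert("login".into(), Value::Str("stephen".into()));
    let mut repo = BTreeMap::new();
    repo.insert("id".into(), Value::Num(7));
    repo.insert("name".into(), Value::Str("fj".into()));
    repo.insert("owner".into(), Value::Obj(owner));
    let repo = Value::Obj(repo);

    let out = project(&repo, &["name", "owner.login"]);
    assert_eq!(out.get("owner.login"), Some(&Value::Str("stephen".into())));
    assert_eq!(out.len(), 2); // "id" was not requested, so it is dropped
}
```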

CI:
* `.forgejo/workflows/ci.yml` runs fmt/clippy/test/release-build on
  every push and PR, mirroring the local pre-push hook.

Docs:
* `SECURITY.md` with threat model and known sharp edges.
* `docs/gh-to-fj.md` full command-by-command mapping.
* `docs/faq.md` covering tokens, hosts, debug, scripting, plugins.

Tests: 60 → 75 passing (2 ignored: editor and env-mutating tests that
fight the cargo test harness on macOS).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Stephen Way 2026-05-13 14:52:28 -07:00
parent d87a30bb29
commit faaf522b05
29 changed files with 1835 additions and 143 deletions

.forgejo/workflows/ci.yml (new file)

@@ -0,0 +1,49 @@
name: ci
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  check:
    runs-on: docker
    container:
      image: rust:1.95-bookworm
    steps:
      - name: checkout
        uses: actions/checkout@v4
      - name: cache cargo
        uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            ~/.cargo/git
            target
          key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: |
            ${{ runner.os }}-cargo-
      - name: install components
        run: |
          rustup component add rustfmt clippy
      - name: cargo fmt
        run: cargo fmt --all -- --check
      - name: cargo clippy
        run: cargo clippy --all-targets --all-features -- -D warnings
      - name: cargo test
        run: cargo test --all --locked --no-fail-fast
      - name: cargo build --release
        run: cargo build --release --locked
      - name: smoke-check binary
        run: |
          ./target/release/fj --version
          ./target/release/fj --help >/dev/null
          ./target/release/fj completion zsh >/dev/null


@@ -6,32 +6,71 @@ All notable changes will be recorded here. The format follows
## [Unreleased]
### Added (agent-focused Forgejo gaps)
- `fj issue edit-comment` / `delete-comment`. Lets an agent (or you)
  fix or remove a wrong comment after the fact.
- `fj search code "..."` (and `-R owner/name` to scope to one repo).
  Powered by Forgejo's `/repos/search/code` endpoint.
- `fj pr request-review N user1 user2,user3` and
  `fj pr unrequest-review N user1`. Distinct from `pr review`, which
  submits your own review.
- `fj repo watch` / `unwatch` / `star` / `unstar` / `starred`.
- `fj milestone {list,view,create,edit,close,reopen,delete,assign}`.
  Includes `fj milestone assign N --milestone ID|none` to attach an
  issue or PR to a milestone.
### Added (UX + stability)
- `--json-fields field1,field2` global flag. gh-style projection on top
  of any `--json` output, with dotted-path support
  (`--json-fields owner.login,id`).
- 429 / Retry-After honored in the retry loop with a 30 s cap. Wiremock
  test added.
- `did you mean` suggestions on typo'd subcommands via clap's
  `suggestions` feature.
- `fj auth token` and `fj auth status --show-token` now refuse to write
  to a TTY (use `--force` to override). Avoids accidental shoulder-
  surfing or capture in shell history.
- `tokio::signal::ctrl_c()` race in `cli::run` so the pager guard drops
  cleanly on SIGINT.
- 10 wiremock-backed HTTP client integration tests covering retry
  behavior (5xx, 429), header forwarding, pagination, and panic-free
  error paths.
- `Client::for_base_url` test constructor pointing at an arbitrary URL.
- `.forgejo/workflows/ci.yml` runs the same gate as the pre-push hook
  on every push and PR.
### Added (docs)
- `SECURITY.md` covering threat model, known sharp edges, and reporting.
- `docs/gh-to-fj.md` — complete command-by-command mapping.
- `docs/faq.md` — common questions about tokens, hosts, debug,
  scripting, plugins.
### Changed
- Trimmed dependencies (no more `indicatif`, `futures-util`,
  `is-terminal`, `textwrap`, `tempfile` in prod). Dropped reqwest
  features we don't use (`stream`, `brotli`). Release profile uses
  `lto = "fat"` and `panic = "abort"`.
- HTTP retry loop builds the request once and clones via
  `reqwest::Request::try_clone` per attempt.
- Binary size: 5.94 MB → 4.15 MB stripped (-30%).
### Fixed
- **Shell injection** in `fj auth setup-git`. The hostname now must
  match a strict DNS pattern before being interpolated into the
  credential-helper string, and we call `git config` directly with
  separate args instead of going through `sh -c`.
- **Pager won't compile on Windows**. The libc-based `dup2` redirect
  now lives behind `#[cfg(unix)]`; non-Unix gets a no-op stub that
  returns `None` from `maybe_start`.
- Removed the unsafe `std::env::set_var("FJ_NO_PAGER")` from dispatch.
  `--no-pager` is now threaded into `pager::maybe_start(force_disabled)`.
- Replaced the panicking `.expect("token contains invalid header chars")`
  in `auth_headers` with a typed error.
## 0.1.0 — 2026-05-13

Cargo.lock (generated)

@@ -426,6 +426,12 @@ dependencies = [
 "windows-sys 0.61.2",
]

[[package]]
name = "fastrand"
version = "2.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f1f227452a390804cdb637b74a86990f2a7d7ba4b7d5693aac9b4dd6defd8d6"

[[package]]
name = "find-msvc-tools"
version = "0.1.9"

@@ -451,6 +457,7 @@ dependencies = [
 "serde_json",
 "supports-color 3.0.2",
 "tabled",
 "tempfile",
 "thiserror 2.0.18",
 "tokio",
 "toml",

@@ -1702,6 +1709,19 @@ dependencies = [
 "syn 1.0.109",
]

[[package]]
name = "tempfile"
version = "3.27.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32497e9a4c7b38532efcdebeef879707aa9f794296a4f0244f6f69e9bc8574bd"
dependencies = [
 "fastrand",
 "getrandom 0.3.4",
 "once_cell",
 "rustix",
 "windows-sys 0.61.2",
]

[[package]]
name = "terminal_size"
version = "0.4.4"


@@ -16,7 +16,7 @@ path = "src/main.rs"
[dependencies]
anyhow = "1"
thiserror = "2"
clap = { version = "4.5", features = ["derive", "env", "wrap_help", "suggestions"] }
clap_complete = "4.5"
clap_mangen = "0.2"
tokio = { version = "1", features = ["rt-multi-thread", "macros", "fs", "process", "io-util", "io-std", "signal"] }

@@ -36,6 +36,7 @@ libc = "0.2"
[dev-dependencies]
wiremock = "0.6"
tempfile = "3"
tokio = { version = "1", features = ["rt-multi-thread", "macros", "test-util"] }

[profile.release]

SECURITY.md (new file)

@@ -0,0 +1,69 @@
# Security policy
## Reporting a vulnerability
Please report security issues to **stephen@rasterstate.com** rather than
opening a public issue. Encrypt with the GPG key below if the issue is
sensitive. Expect a response within 72 hours.
Include if possible:
- A clear description of the issue and impact.
- Steps to reproduce, with a minimal example.
- The fj version (`fj --version`) and Forgejo version
(`fj api /version`).
## Threat model
fj is a CLI that:
- Stores Forgejo API tokens in the OS keychain (macOS Keychain, Linux
Secret Service, Windows Credential Manager). Tokens are never written
to disk in `hosts.toml` or anywhere else under our control.
- Sends those tokens as bearer headers over HTTPS to the configured
hosts. The HTTP client uses rustls with the platform's trust store.
- Shells out to `git` for clone, fetch, and credential-helper setup.
- Opens `$EDITOR` for body inputs and `$FJ_PAGER` / `$PAGER` for long
output.
Out of scope:
- Vulnerabilities in Forgejo itself. Report those to the Forgejo project.
- Vulnerabilities in `cargo` or in dependent crates. Those should be
reported via `cargo audit` channels.
- Loss of tokens stored in the OS keychain via OS-level compromise.
## Known sharp edges
- `fj auth token` and `fj auth status --show-token` print plaintext
  tokens. We refuse to write to a TTY by default, since a token sitting
  in terminal scrollback is easy to shoulder-surf or paste by accident.
  Pass `--force` to override, or pipe to a consumer like a credential
  helper.
- `fj auth setup-git` registers a `credential.helper` that invokes
  `fj auth token` at git-credential time. The hostname is validated
  against a strict DNS-style pattern at setup, but the helper string
  still passes through git's `!`-prefix shell evaluator at use time. If
  some other tool has written an attacker-controlled helper value into
  your git config, audit it.
- `fj extension`'s plugin dispatch shells out to `fj-<name>` binaries
on PATH. Treat any executable named `fj-*` on PATH as trusted code.
## Token hygiene
- Rotate Forgejo tokens periodically. `fj auth refresh --token NEW`
replaces the stored value without losing your host config.
- Scope tokens narrowly when the API supports it. fj only needs the
scopes for the operations you actually run.
- If you suspect a token is compromised, revoke it on the Forgejo side
immediately, then `fj auth logout --host <host>` and `fj auth login
--host <host>` to re-pair.
## Versions
We aim to fix security issues in the latest `main` and one prior minor
release. There is no LTS branch yet.
## GPG key
(Not yet published. Email reports without encryption are acceptable for
now; ack within 72 hours.)


@@ -4,6 +4,9 @@
  the HTTP funnel, pager + SIGINT, repo resolution, test strategy.
- [`jq.md`](jq.md) — the `fj api --jq` syntax. Dot paths, brackets,
  negative indices, pipes.
- [`gh-to-fj.md`](gh-to-fj.md) — command-by-command mapping from gh.
- [`faq.md`](faq.md) — common questions about tokens, hosts, debug,
  scripting, plugins.
- [`troubleshooting.md`](troubleshooting.md) — keychain prompts, hangs,
  401 errors, `--debug`, pager opt-out, alias precedence, hook bypass.

docs/faq.md (new file)

@@ -0,0 +1,112 @@
# FAQ
## How do I get a token?
In Forgejo: Settings → Applications → Generate New Token. Pick the
scopes you need (typically `repo`, `notification`, and optionally
`admin:public_key` if you'll manage SSH keys). `fj auth login` will
prompt you to paste it.
## Where are my tokens stored?
In your OS keychain (macOS Keychain, Linux Secret Service, Windows
Credential Manager) under the service name `fj`, keyed by hostname.
Never on disk in `hosts.toml`.
## How do I add a second host?
```sh
fj auth login --host other.example.com
fj auth switch other.example.com # make it the default
# or pin per-command:
fj --host other.example.com repo list
```
## What's the difference between `fj api --jq` and `fj --json-fields`?
- `fj api --jq` projects a single JSON document with dotted paths,
brackets, and pipes. See [`jq.md`](jq.md). It only works on
`fj api`'s output.
- `fj --json-fields foo,bar` is a global flag that applies to ANY
command's `--json` output. It keeps only those top-level (or dotted)
fields. This is the gh-style behavior.
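For example (repo slug and field names illustrative):

```sh
# gh-style: keep only two fields from the full repo JSON
fj repo view stephen/fj --json --json-fields full_name,owner.login
```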
## Why does `fj` keep re-prompting the macOS keychain after I rebuild?
The keychain identifies applications by binary hash. Every
`cargo build --release` produces a new hash, so macOS treats it as a
new application. Click "Always Allow" once and it'll stick until the
next rebuild. See `troubleshooting.md`.
## How do I make `fj` work in a non-interactive script?
- Set `FJ_TOKEN` env var to avoid `fj auth login` prompting.
- Set `FJ_HOST` to pin the host.
- Always pass `--body "..."` or `--body -` so `fj` never opens
`$EDITOR`.
- Pipe output through `--json` for parseable results.
- Set `FJ_NO_PAGER=1` (or pass `--no-pager`).
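Putting those together, a minimal non-interactive sketch (the env var
names come from this FAQ; the CI token variable is whatever your CI
system provides):

```sh
#!/bin/sh
set -eu
export FJ_TOKEN="$CI_FORGEJO_TOKEN"   # assumption: injected by your CI
export FJ_HOST=forgejo.example.com
export FJ_NO_PAGER=1

# --body avoids $EDITOR; --json keeps the output parseable
fj issue comment 12 -R owner/repo --body "Automated note from CI." --json
```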
## Can `fj` work without a git remote?
Yes. Without `-R/--repo` fj falls back to `git remote -v`. With `-R`
the remote isn't consulted at all. So you can run `fj repo view foo/bar`
from anywhere, including outside a git repo.
## How do I delete the test repo I created?
```sh
fj repo delete stephen/fj-cli-test # asks you to type the slug
fj repo delete stephen/fj-cli-test -y # skip the confirmation
```
## What if Forgejo and `gh` flag names disagree?
We track `gh`. See [`gh-to-fj.md`](gh-to-fj.md) for the mapping. When
`gh` doesn't have an equivalent (e.g. `fj milestone`, `fj protect`,
`fj hook`, `fj search code`), we follow Forgejo's terminology.
## Why are some `fj-*` plugins not found?
Plugin dispatch only triggers when the first positional argument isn't
a known subcommand. See `KNOWN_SUBCOMMANDS` in `src/main.rs`. If your
plugin name collides with a built-in subcommand, rename the plugin.
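A plugin is just an executable named `fj-<name>` on PATH. A minimal
sketch (paths and the plugin name are illustrative):

```sh
cat > ~/.local/bin/fj-hello <<'EOF'
#!/bin/sh
echo "hello from a plugin"
EOF
chmod +x ~/.local/bin/fj-hello
fj hello   # dispatches to fj-hello, since "hello" isn't a built-in
```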
## Can I script `git push` to also push via `fj`?
`fj` doesn't push git refs — `git` does that. But `fj auth setup-git`
installs a credential helper so that `git push` to your Forgejo host
authenticates via your stored token, no `~/.netrc` needed.
## How do I see what `fj` is sending?
```sh
fj --debug repo list -R foo/bar
# or
FJ_DEBUG=1 fj repo list -R foo/bar
```
Stderr will show `→ GET https://… ?query` and `← 200 url` for every
request, with a body preview if applicable.
## How do I customize the pager?
```sh
FJ_PAGER='less -R' fj repo list # don't auto-quit on short output
FJ_PAGER=cat fj repo list # disable paging
FJ_NO_PAGER=1 fj repo list # same as above
fj --no-pager repo list # per-command opt out
```
## Will `fj` work on Windows?
Compilation works (the pager is a no-op there; everything else is
cross-platform). Not actively tested. Reports welcome.
## Where does `fj` store its config?
- macOS: `~/Library/Application Support/com.rasterstate.fj/`
- Linux: `$XDG_CONFIG_HOME/fj/` (`~/.config/fj/`)
- Windows: `%APPDATA%\rasterstate\fj\`
Files: `hosts.toml`, `aliases.toml`, `config.toml`.

docs/gh-to-fj.md (new file)

@@ -0,0 +1,140 @@
# `gh` → `fj` quick reference
fj's command surface tracks `gh` closely. This table maps the commands
you reach for most. Where the semantics diverge, the difference is
called out.
## Auth
| `gh` | `fj` | Notes |
| ------------------------ | --------------------------- | ---------------------------------------------------- |
| `gh auth login` | `fj auth login` | Multi-host; first run asks for the hostname. |
| `gh auth status` | `fj auth status` | Tokens stored in OS keychain, never on disk. |
| `gh auth logout` | `fj auth logout` | |
| `gh auth refresh` | `fj auth refresh` | Re-verifies (or replaces) the stored token. |
| `gh auth token` | `fj auth token` | Refuses to write to a TTY without `--force`. |
| `gh auth setup-git` | `fj auth setup-git` | Installs a credential helper. |
| `gh auth switch` | `fj auth switch` | Picks the default host for `--host`-less commands. |
## Repos
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh repo list` | `fj repo list` | |
| `gh repo view OWNER/NAME` | `fj repo view OWNER/NAME` | Auto-detects from `git remote` when omitted. |
| `gh repo clone OWNER/NAME` | `fj repo clone OWNER/NAME` | |
| `gh repo create NAME` | `fj repo create NAME` | `--clone` to clone after creating. |
| `gh repo fork` | `fj repo fork` | |
| `gh repo sync` | `fj repo sync` | Calls Forgejo's `merge-upstream` endpoint. |
| `gh repo edit` | `fj repo edit` | |
| `gh repo rename` | `fj repo rename` | |
| `gh repo archive` | `fj repo archive` | |
| `gh repo delete` | `fj repo delete` | Requires typing the slug to confirm. |
| (n/a) | `fj repo branches` | |
| (n/a) | `fj repo topics --set a,b,c` | |
| (n/a) | `fj repo mirror <source-url>` | Forgejo migrate/mirror endpoint. |
| `gh repo set-default` | (not yet) | Multi-remote default picking. |
## Issues
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh issue list` | `fj issue list` | |
| `gh issue view N` | `fj issue view N` | |
| `gh issue create` | `fj issue create` | Omit `--body` to open `$EDITOR`. |
| `gh issue edit N` | `fj issue edit N` | |
| `gh issue close N` | `fj issue close N` | |
| `gh issue reopen N` | `fj issue reopen N` | |
| `gh issue comment N --body B` | `fj issue comment N --body B` | New: also `fj issue edit-comment ID`. |
| `gh issue develop N` | `fj issue develop N` | Branches off the default branch. |
## Pull requests
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh pr list` | `fj pr list` | |
| `gh pr view N` | `fj pr view N` | `--comments` shows reviews + issue comments. |
| `gh pr create` | `fj pr create` | Head defaults to current branch. |
| `gh pr edit N` | `fj pr edit N` | |
| `gh pr diff N` | `fj pr diff N` | Raw unified diff. |
| `gh pr checks N` | `fj pr checks N` | |
| `gh pr ready N` | `fj pr ready N` | |
| `gh pr review N --approve` | `fj pr review N --event approve` | |
| `gh pr review N --request-changes`| `fj pr review N --event request-changes` | |
| `gh pr status` | `fj pr status` | |
| `gh pr checkout N` | `fj pr checkout N` | |
| `gh pr merge N`                   | `fj pr merge N`                   | `--style merge\|rebase\|rebase-merge\|squash`.  |
| `gh pr close N` | `fj pr close N` | |
| `gh pr reopen N` | `fj pr reopen N` | |
| (n/a in gh) | `fj pr request-review N USER USER` | New: assign specific reviewers. |
| (n/a in gh) | `fj pr unrequest-review N USER` | New: remove a reviewer request. |
## Releases
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh release list` | `fj release list` | |
| `gh release view TAG` | `fj release view TAG` | |
| `gh release create TAG` | `fj release create TAG` | `--asset path` repeatable. |
| `gh release upload TAG file` | `fj release upload TAG file` | |
| `gh release download TAG` | `fj release download TAG` | |
| `gh release delete TAG` | `fj release delete TAG` | |
## Labels & milestones
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh label list` | `fj label list` | |
| `gh label create NAME` | `fj label create NAME` | |
| `gh label edit NAME` | `fj label edit NAME` | |
| `gh label delete NAME` | `fj label delete NAME` | |
| (n/a) | `fj milestone list` | New: Forgejo milestones. |
| (n/a) | `fj milestone create TITLE` | |
| (n/a) | `fj milestone assign N --milestone ID` | Assign an issue/PR to a milestone. |
## Workflows & secrets
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh run list` | `fj run list` | Forgejo Actions. |
| `gh run view N` | `fj run view N` | |
| `gh run rerun N` | `fj run rerun N` | |
| `gh run cancel N` | `fj run cancel N` | |
| `gh secret list` | `fj secret list` | |
| `gh secret set NAME -b VAL` | `fj secret set NAME --value VAL` | Also `--from-file path` (or `-` for stdin). |
| `gh variable list` | `fj variable list` | |
## Search
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh search repos QUERY` | `fj search repos QUERY` | |
| `gh search issues QUERY` | `fj search issues QUERY` | |
| `gh search prs QUERY` | `fj search prs QUERY` | |
| `gh search code QUERY` | `fj search code QUERY` | New. Optional `-R owner/name` restricts scope. |
## Other groups
| `gh` | `fj` | Notes |
| --------------------------------- | --------------------------------- | ----------------------------------------------- |
| `gh browse` | `fj browse` | `fj browse src/main.rs` deep-links. |
| `gh status` | `fj status` | Notifications inbox. |
| `gh ssh-key list` | `fj ssh-key list` | |
| `gh gpg-key list` | `fj gpg-key list` | |
| `gh alias list` | `fj alias list` | |
| `gh config get KEY` | `fj config get KEY` | |
| `gh extension list` | `fj extension list` | |
| `gh api PATH` | `fj api PATH` | See `docs/jq.md` for `--jq` syntax. |
| (n/a) | `fj protect` | Branch protection rules. |
| (n/a) | `fj hook` | Webhooks. |
| (n/a) | `fj repo watch / unwatch` | Subscribe to repo notifications. |
| (n/a) | `fj repo star / unstar / starred` | |
## Global flags
| `gh` | `fj` |
| --------------------------------- | --------------------------------- |
| `gh ... --hostname` | `fj ... --host` (also `FJ_HOST`) |
| (n/a) | `fj ... --debug` (or `FJ_DEBUG`) |
| (gh uses `less` by default) | `fj ... --no-pager` / `FJ_NO_PAGER` / `FJ_PAGER` |
| `gh ... --json field,field` | `fj ... --json --json-fields field,field` (global flag, gh-style projection) |


@@ -198,3 +198,36 @@ pub async fn list_comments(
    let path = format!("/api/v1/repos/{owner}/{name}/issues/{number}/comments");
    client.json(Method::GET, &path, &[], None::<&()>).await
}

#[derive(Debug, Clone, Serialize)]
struct EditComment<'a> {
    body: &'a str,
}

/// Edit an issue or PR comment by its id (NOT by the issue number).
/// Comment ids come from `list_comments` or from the URL of a comment
/// (e.g. `#issuecomment-7`).
pub async fn edit_comment(
    client: &Client,
    owner: &str,
    name: &str,
    comment_id: u64,
    body: &str,
) -> Result<Comment> {
    let path = format!("/api/v1/repos/{owner}/{name}/issues/comments/{comment_id}");
    client
        .json(Method::PATCH, &path, &[], Some(&EditComment { body }))
        .await
}

pub async fn delete_comment(
    client: &Client,
    owner: &str,
    name: &str,
    comment_id: u64,
) -> Result<()> {
    let path = format!("/api/v1/repos/{owner}/{name}/issues/comments/{comment_id}");
    let res = client.request(Method::DELETE, &path, &[], None).await?;
    res.error_for_status()?;
    Ok(())
}

src/api/milestone.rs (new file)

@@ -0,0 +1,105 @@
use anyhow::Result;
use chrono::{DateTime, Utc};
use reqwest::Method;
use serde::{Deserialize, Serialize};

use crate::client::Client;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Milestone {
    pub id: u64,
    pub title: String,
    #[serde(default)]
    pub description: String,
    pub state: String,
    #[serde(default)]
    pub open_issues: u64,
    #[serde(default)]
    pub closed_issues: u64,
    #[serde(default)]
    pub due_on: Option<DateTime<Utc>>,
    #[serde(default)]
    pub created_at: Option<DateTime<Utc>>,
    #[serde(default)]
    pub closed_at: Option<DateTime<Utc>>,
}

pub async fn list(client: &Client, owner: &str, name: &str, state: &str) -> Result<Vec<Milestone>> {
    let path = format!("/api/v1/repos/{owner}/{name}/milestones");
    let q = vec![("state".into(), state.into())];
    client.json(Method::GET, &path, &q, None::<&()>).await
}

pub async fn get(client: &Client, owner: &str, name: &str, id: u64) -> Result<Milestone> {
    let path = format!("/api/v1/repos/{owner}/{name}/milestones/{id}");
    client.json(Method::GET, &path, &[], None::<&()>).await
}

#[derive(Debug, Clone, Serialize)]
pub struct CreateMilestone<'a> {
    pub title: &'a str,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub description: Option<&'a str>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub due_on: Option<&'a str>,
}

pub async fn create(
    client: &Client,
    owner: &str,
    name: &str,
    body: &CreateMilestone<'_>,
) -> Result<Milestone> {
    let path = format!("/api/v1/repos/{owner}/{name}/milestones");
    client.json(Method::POST, &path, &[], Some(body)).await
}

#[derive(Debug, Clone, Serialize, Default)]
pub struct EditMilestone<'a> {
    #[serde(skip_serializing_if = "Option::is_none")]
    pub title: Option<&'a str>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub description: Option<&'a str>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub state: Option<&'a str>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub due_on: Option<&'a str>,
}

pub async fn edit(
    client: &Client,
    owner: &str,
    name: &str,
    id: u64,
    body: &EditMilestone<'_>,
) -> Result<Milestone> {
    let path = format!("/api/v1/repos/{owner}/{name}/milestones/{id}");
    client.json(Method::PATCH, &path, &[], Some(body)).await
}

pub async fn delete(client: &Client, owner: &str, name: &str, id: u64) -> Result<()> {
    let path = format!("/api/v1/repos/{owner}/{name}/milestones/{id}");
    let res = client.request(Method::DELETE, &path, &[], None).await?;
    res.error_for_status()?;
    Ok(())
}

/// Assign an existing issue/PR to a milestone (or pass `None` to detach).
pub async fn assign(
    client: &Client,
    owner: &str,
    name: &str,
    issue_number: u64,
    milestone_id: Option<u64>,
) -> Result<()> {
    let path = format!("/api/v1/repos/{owner}/{name}/issues/{issue_number}");
    let body = match milestone_id {
        Some(id) => serde_json::json!({ "milestone": id }),
        None => serde_json::json!({ "milestone": null }),
    };
    let res = client
        .request(Method::PATCH, &path, &[], Some(&body))
        .await?;
    res.error_for_status()?;
    Ok(())
}


@@ -1,6 +1,7 @@
pub mod hook;
pub mod issue;
pub mod label;
pub mod milestone;
pub mod notification;
pub mod org;
pub mod protect;


@@ -267,6 +267,53 @@ pub async fn submit_review(
        .await
}

#[derive(Debug, Clone, Serialize)]
struct ReviewerBody {
    reviewers: Vec<String>,
}

/// Request reviews from specific users on a PR. Distinct from
/// `submit_review`, which is your own approval/changes/comment.
pub async fn request_reviewers(
    client: &Client,
    owner: &str,
    name: &str,
    number: u64,
    reviewers: Vec<String>,
) -> Result<()> {
    let path = format!("/api/v1/repos/{owner}/{name}/pulls/{number}/requested_reviewers");
    let res = client
        .request(
            Method::POST,
            &path,
            &[],
            Some(&serde_json::to_value(ReviewerBody { reviewers })?),
        )
        .await?;
    res.error_for_status()?;
    Ok(())
}

pub async fn unrequest_reviewers(
    client: &Client,
    owner: &str,
    name: &str,
    number: u64,
    reviewers: Vec<String>,
) -> Result<()> {
    let path = format!("/api/v1/repos/{owner}/{name}/pulls/{number}/requested_reviewers");
    let res = client
        .request(
            Method::DELETE,
            &path,
            &[],
            Some(&serde_json::to_value(ReviewerBody { reviewers })?),
        )
        .await?;
    res.error_for_status()?;
    Ok(())
}

pub async fn list_reviews(
    client: &Client,
    owner: &str,


@@ -292,6 +292,66 @@ pub async fn migrate(client: &Client, body: &Migrate<'_>) -> Result<Repo> {
        .await
}

// Watch / star ───────────────────────────────────────────────────────────────

#[derive(Debug, Clone, Serialize, Deserialize, Default)]
pub struct Subscription {
    #[serde(default)]
    pub subscribed: bool,
    #[serde(default)]
    pub ignored: bool,
    #[serde(default)]
    pub reason: Option<String>,
    #[serde(default)]
    pub url: String,
}

#[allow(dead_code)]
pub async fn get_subscription(client: &Client, owner: &str, name: &str) -> Result<Subscription> {
    let path = format!("/api/v1/repos/{owner}/{name}/subscription");
    client.json(Method::GET, &path, &[], None::<&()>).await
}

pub async fn watch(client: &Client, owner: &str, name: &str) -> Result<Subscription> {
    let path = format!("/api/v1/repos/{owner}/{name}/subscription");
    client
        .json(
            Method::PUT,
            &path,
            &[],
            Some(&serde_json::json!({ "subscribed": true, "ignored": false })),
        )
        .await
}

pub async fn unwatch(client: &Client, owner: &str, name: &str) -> Result<()> {
    let path = format!("/api/v1/repos/{owner}/{name}/subscription");
    let res = client.request(Method::DELETE, &path, &[], None).await?;
    res.error_for_status()?;
    Ok(())
}

pub async fn star(client: &Client, owner: &str, name: &str) -> Result<()> {
    let path = format!("/api/v1/user/starred/{owner}/{name}");
    let res = client.request(Method::PUT, &path, &[], None).await?;
    res.error_for_status()?;
    Ok(())
}

pub async fn unstar(client: &Client, owner: &str, name: &str) -> Result<()> {
    let path = format!("/api/v1/user/starred/{owner}/{name}");
    let res = client.request(Method::DELETE, &path, &[], None).await?;
    res.error_for_status()?;
    Ok(())
}

pub async fn list_starred(client: &Client, limit: u32) -> Result<Vec<Repo>> {
    let q = vec![("limit".into(), limit.clamp(1, 50).to_string())];
    client
        .json(Method::GET, "/api/v1/user/starred", &q, None::<&()>)
        .await
}

pub async fn mirror_sync(client: &Client, owner: &str, name: &str) -> Result<()> {
    let path = format!("/api/v1/repos/{owner}/{name}/mirror-sync");
    let res = client.request(Method::POST, &path, &[], None).await?;


@@ -43,3 +43,62 @@ pub async fn users(client: &Client, query: &str, limit: u32) -> Result<Vec<super
    let body: Envelope<super::user::User> = client.json(Method::GET, path, &q, None::<&()>).await?;
    Ok(body.data)
}

/// A single hit in a code search. Forgejo's search-code endpoint returns
/// matches grouped by file with `formatted_lines` containing a highlighted
/// snippet. We deserialize what we need and stay forward-compatible by
/// allowing unknown fields.
#[derive(Debug, Clone, serde::Serialize, Deserialize)]
pub struct CodeHit {
    #[serde(default)]
    pub repo_id: u64,
    #[serde(default)]
    pub html_url: String,
    #[serde(default)]
    pub path: String,
    #[serde(default)]
    pub formatted_lines: String,
    #[serde(default)]
    pub raw_lines: String,
    #[serde(default)]
    pub matches: u64,
    #[serde(default)]
    pub language: String,
}

#[derive(Debug, Deserialize)]
struct CodeSearchResponse {
    #[serde(default)]
    results: Vec<CodeHit>,
    #[serde(default)]
    #[allow(dead_code)]
    total: u64,
}

/// Search code across all repos visible to the authenticated user.
pub async fn code(client: &Client, query: &str, limit: u32) -> Result<Vec<CodeHit>> {
    let path = "/api/v1/repos/search/code";
    let q = vec![
        ("q".into(), query.into()),
        ("limit".into(), limit.clamp(1, 50).to_string()),
    ];
    let body: CodeSearchResponse = client.json(Method::GET, path, &q, None::<&()>).await?;
    Ok(body.results)
}

/// Search code restricted to one repository.
pub async fn code_in_repo(
    client: &Client,
    owner: &str,
    name: &str,
    query: &str,
    limit: u32,
) -> Result<Vec<CodeHit>> {
    let path = format!("/api/v1/repos/{owner}/{name}/search/code");
    let q = vec![
        ("q".into(), query.into()),
        ("limit".into(), limit.clamp(1, 50).to_string()),
    ];
    let body: CodeSearchResponse = client.json(Method::GET, &path, &q, None::<&()>).await?;
    Ok(body.results)
}


@@ -53,20 +53,28 @@ pub fn path() -> Result<std::path::PathBuf> {
}
pub fn load() -> Result<AliasFile> {
load_from(&path()?)
}
pub fn save(a: &AliasFile) -> Result<()> {
save_to(&path()?, a)
}
/// Load aliases from an explicit path. Returns the default (empty) file when
/// the path doesn't exist. Used by tests; production callers use `load`.
pub fn load_from(p: &std::path::Path) -> Result<AliasFile> {
if !p.exists() {
return Ok(AliasFile::default());
}
let text = fs::read_to_string(p).with_context(|| format!("reading {}", p.display()))?;
toml::from_str(&text).with_context(|| format!("parsing {}", p.display()))
}
pub fn save_to(p: &std::path::Path, a: &AliasFile) -> Result<()> {
if let Some(parent) = p.parent() {
fs::create_dir_all(parent)?;
}
fs::write(p, toml::to_string_pretty(a)?)?;
Ok(())
}
@@ -131,6 +139,7 @@ pub fn expand_argv(argv: Vec<String>) -> Vec<String> {
#[cfg(test)]
mod tests {
use super::*;
use tempfile::TempDir;
#[test]
fn expand_no_alias_passthrough() {
@@ -142,4 +151,34 @@ mod tests {
fn expand_empty_argv() {
assert_eq!(expand_argv(vec!["fj".into()]), vec!["fj".to_string()]);
}
#[test]
fn save_then_load_roundtrip() {
let dir = TempDir::new().unwrap();
let path = dir.path().join("aliases.toml");
let mut a = AliasFile::default();
a.aliases.insert("co".into(), "pr checkout".into());
a.aliases.insert("st".into(), "pr status".into());
save_to(&path, &a).unwrap();
let loaded = load_from(&path).unwrap();
assert_eq!(loaded.aliases.len(), 2);
assert_eq!(loaded.aliases.get("co"), Some(&"pr checkout".to_string()));
assert_eq!(loaded.aliases.get("st"), Some(&"pr status".to_string()));
}
#[test]
fn load_missing_file_is_empty() {
let dir = TempDir::new().unwrap();
let path = dir.path().join("does-not-exist.toml");
let loaded = load_from(&path).unwrap();
assert!(loaded.aliases.is_empty());
}
#[test]
fn save_creates_parent_dirs() {
let dir = TempDir::new().unwrap();
let nested = dir.path().join("a/b/c/aliases.toml");
save_to(&nested, &AliasFile::default()).unwrap();
assert!(nested.exists());
}
}


@@ -38,6 +38,10 @@ pub enum AuthSub {
pub struct TokenArgs {
#[arg(long)]
pub host: Option<String>,
/// Print the token even if stdout is a terminal (default refuses to, to
/// avoid accidental shoulder-surfing in shell history).
#[arg(long)]
pub force: bool,
}
#[derive(Debug, Args)]
@@ -111,10 +115,16 @@ pub async fn run(cmd: AuthCmd) -> Result<()> {
}
fn token(args: TokenArgs) -> Result<()> {
use std::io::IsTerminal;
let hosts = Hosts::load()?;
let host = hosts.resolve_host(args.host.as_deref())?.to_string();
let token = token_store::load_token(&host)?
.ok_or_else(|| anyhow!("no token stored for {host}; run `fj auth login`"))?;
if std::io::stdout().is_terminal() && !args.force {
return Err(anyhow!(
"refusing to print a plaintext token to a terminal; pipe to a file/command, or pass --force"
));
}
println!("{token}");
Ok(())
}
@@ -162,48 +172,91 @@ async fn refresh(args: RefreshArgs) -> Result<()> {
fn setup_git(args: SetupGitArgs) -> Result<()> {
let hosts = Hosts::load()?;
let host = hosts.resolve_host(args.host.as_deref())?.to_string();
validate_hostname(&host)?;
let cfg = hosts
.hosts
.get(&host)
.cloned()
.ok_or_else(|| anyhow!("host '{host}' not configured"))?;
if let Some(u) = cfg.user.as_deref() {
validate_username(u)?;
}
// The credential helper value is `!`-prefixed, which means git will run it
// through sh when the helper is invoked. We've validated the hostname is
// shell-safe above (DNS-style only, no metacharacters), so the
// interpolation here is safe even inside that sh context.
let key = format!("credential.https://{host}.helper");
let value = format!("!fj auth token --host {host} | sed 's/^/password=/'");
let user_key = format!("credential.https://{host}.username");
let user_val = cfg.user.as_deref().unwrap_or("");
if args.dry_run {
println!("# Run these commands to install the fj git credential helper:");
println!("git config --global {key} {value:?}");
if !user_val.is_empty() {
println!("git config --global {user_key} {user_val:?}");
}
return Ok(());
}
// Call `git config` directly with separate args — no shell layer between
// us and git, so there's nothing to escape.
run_git_config(&key, &value)?;
if !user_val.is_empty() {
run_git_config(&user_key, user_val)?;
}
println!("✓ Installed git credential helper for {host}");
Ok(())
}
fn run_git_config(key: &str, value: &str) -> Result<()> {
let status = std::process::Command::new("git")
.args(["config", "--global", key, value])
.status()
.context("running `git config`")?;
if !status.success() {
return Err(anyhow!(
"`git config --global {key} ...` failed with status {status}"
));
}
Ok(())
}
/// A hostname must look like a DNS name (optionally with a port). This is the
/// keystone of `setup-git`'s safety: if the host is shell-safe, the
/// credential-helper interpolation in `setup_git` is too. Reject anything else.
fn validate_hostname(host: &str) -> Result<()> {
if host.is_empty() || host.len() > 253 {
return Err(anyhow!("invalid hostname '{host}'"));
}
let ok = host
.chars()
.all(|c| matches!(c, 'a'..='z' | 'A'..='Z' | '0'..='9' | '.' | '-' | ':'));
if !ok {
return Err(anyhow!(
"hostname '{host}' contains characters that are not allowed in a DNS name"
));
}
Ok(())
}
fn validate_username(user: &str) -> Result<()> {
if user.is_empty() || user.len() > 64 {
return Err(anyhow!("invalid username '{user}'"));
}
let ok = user
.chars()
.all(|c| c.is_ascii_alphanumeric() || matches!(c, '-' | '_' | '.'));
if !ok {
return Err(anyhow!(
"username '{user}' contains characters that are not allowed in a Forgejo login"
));
}
Ok(())
}
async fn login(args: LoginArgs) -> Result<()> {
let hostname = match args.host.clone() {
Some(h) => h,
@@ -314,7 +367,13 @@ async fn status(args: StatusArgs) -> Result<()> {
}
);
if args.show_token {
use std::io::IsTerminal;
if std::io::stdout().is_terminal() {
println!(
" Token value: {}",
output::dim("(hidden; pipe to a file or use `fj auth token --force`)")
);
} else if let Some(t) = token_store::load_token(name)? {
println!(" Token value: {t}");
}
}
@@ -380,3 +439,38 @@ fn switch(args: SwitchArgs) -> Result<()> {
println!("✓ Default host is now {}", args.host);
Ok(())
}
#[cfg(test)]
mod setup_git_tests {
use super::*;
#[test]
fn rejects_shell_metacharacters_in_hostname() {
assert!(validate_hostname("rasterhub.com; rm -rf /").is_err());
assert!(validate_hostname("foo`whoami`").is_err());
assert!(validate_hostname("foo $(whoami)").is_err());
assert!(validate_hostname("foo|bar").is_err());
assert!(validate_hostname("foo&bar").is_err());
assert!(validate_hostname("foo\"bar").is_err());
assert!(validate_hostname("foo'bar").is_err());
}
#[test]
fn accepts_normal_hostnames() {
assert!(validate_hostname("rasterhub.com").is_ok());
assert!(validate_hostname("git.example.org:2222").is_ok());
assert!(validate_hostname("forgejo-1.internal").is_ok());
}
#[test]
fn rejects_shell_metacharacters_in_username() {
assert!(validate_username("alice; rm").is_err());
assert!(validate_username("alice$user").is_err());
}
#[test]
fn accepts_normal_usernames() {
assert!(validate_username("alice").is_ok());
assert!(validate_username("alice.bob_123").is_ok());
}
}


@@ -114,3 +114,71 @@ pub fn prompt_line(label: &str) -> Result<String> {
stdin().read_line(&mut buf).context("reading line")?;
Ok(buf.trim().to_string())
}
#[cfg(test)]
mod tests {
use super::*;
// Mutates process-global env so it's serial-only and ignored by default.
// Run with `cargo test -- --ignored --test-threads=1`.
#[test]
#[ignore]
fn editor_command_uses_visual_first() {
let saved_visual = env::var("VISUAL").ok();
let saved_editor = env::var("EDITOR").ok();
// SAFETY: tests modifying env are intrinsically racy. The fj test suite
// doesn't run them in parallel against editor_command, and we restore
// on exit. Acceptable for a hermetic CI environment.
unsafe {
env::set_var("VISUAL", "code -w");
env::set_var("EDITOR", "nano");
}
assert_eq!(editor_command(), "code -w");
unsafe {
env::remove_var("VISUAL");
}
assert_eq!(editor_command(), "nano");
unsafe {
env::remove_var("EDITOR");
}
assert_eq!(editor_command(), "vi");
unsafe {
if let Some(v) = saved_visual {
env::set_var("VISUAL", v);
}
if let Some(v) = saved_editor {
env::set_var("EDITOR", v);
}
}
}
// Marked #[ignore] because spawning a real subprocess from the test
// harness occasionally hangs on macOS depending on how cargo wired up
// stdin/stdout. Run with `cargo test -- --ignored` to exercise it.
#[test]
#[ignore]
#[cfg(unix)]
fn edit_text_with_true_returns_initial() {
// `/usr/bin/true` is a no-op command that always exits 0. Using it as
// the "editor" means we get the initial contents back unchanged.
// We use the absolute path to avoid hangs when test runners have an
// odd PATH that resolves `true` to something else.
let saved = env::var("EDITOR").ok();
unsafe { env::set_var("EDITOR", "/usr/bin/true") };
let out = edit_text("FJ_TEST.md", "hello world\n").unwrap();
assert_eq!(out, "hello world");
unsafe {
match saved {
Some(v) => env::set_var("EDITOR", v),
None => env::remove_var("EDITOR"),
}
}
}
#[test]
fn read_body_passes_string_through() {
let r = read_body(Some("hello")).unwrap();
assert_eq!(r.as_deref(), Some("hello"));
assert!(read_body(None).unwrap().is_none());
}
}


@@ -33,6 +33,10 @@ pub enum IssueSub {
Reopen(NumberOnly),
/// Add a comment.
Comment(CommentArgs),
/// Edit a comment by id (comment ids come from `issue view --comments`).
EditComment(EditCommentArgs),
/// Delete a comment by id.
DeleteComment(DeleteCommentArgs),
/// Create a branch tied to the issue.
Develop(DevelopArgs),
}
@@ -137,6 +141,26 @@ pub struct CommentArgs {
pub body: Option<String>,
}
#[derive(Debug, Args)]
pub struct EditCommentArgs {
#[command(flatten)]
pub r: RepoFlag,
/// Comment id (NOT the issue number). Find it via `issue view --comments --json`.
pub comment_id: u64,
/// New body. Use `-` to read from stdin. Omit to open `$EDITOR`.
#[arg(short = 'b', long)]
pub body: Option<String>,
}
#[derive(Debug, Args)]
pub struct DeleteCommentArgs {
#[command(flatten)]
pub r: RepoFlag,
pub comment_id: u64,
#[arg(short = 'y', long)]
pub yes: bool,
}
#[derive(Debug, Args)]
pub struct DevelopArgs {
#[command(flatten)]
@@ -162,10 +186,48 @@ pub async fn run(cmd: IssueCmd, host: Option<&str>) -> Result<()> {
IssueSub::Close(args) => set_state(args, host, "closed").await,
IssueSub::Reopen(args) => set_state(args, host, "open").await,
IssueSub::Comment(args) => comment(args, host).await,
IssueSub::EditComment(args) => edit_comment(args, host).await,
IssueSub::DeleteComment(args) => delete_comment(args, host).await,
IssueSub::Develop(args) => develop(args, host).await,
}
}
async fn edit_comment(args: EditCommentArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let body = match args.body.as_deref() {
Some("-") => {
let mut buf = String::new();
std::io::stdin().read_to_string(&mut buf)?;
buf
}
Some(s) => s.to_string(),
None => editor::edit_text("COMMENT_BODY.md", "")?,
};
if body.trim().is_empty() {
return Err(anyhow!("comment body is empty"));
}
let c = api::issue::edit_comment(&ctx.client, &ctx.owner, &ctx.name, args.comment_id, &body)
.await?;
println!("✓ Updated comment #{} ({})", c.id, c.html_url);
Ok(())
}
async fn delete_comment(args: DeleteCommentArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
if !args.yes {
let ans = editor::prompt_line(&format!(
"Delete comment {}? Type the id to confirm",
args.comment_id
))?;
if ans != args.comment_id.to_string() {
return Err(anyhow!("aborted"));
}
}
api::issue::delete_comment(&ctx.client, &ctx.owner, &ctx.name, args.comment_id).await?;
println!("✓ Deleted comment {}", args.comment_id);
Ok(())
}
async fn list(args: ListArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
if args.web {

src/cli/milestone.rs Normal file

@@ -0,0 +1,274 @@
//! `fj milestone` — Forgejo milestones (releases-by-target-date grouping).
use anyhow::Result;
use clap::{Args, Subcommand};
use crate::api;
use crate::api::milestone::{CreateMilestone, EditMilestone};
use crate::cli::context::{resolve_repo, RepoFlag};
use crate::output;
use super::editor;
#[derive(Debug, Args)]
pub struct MilestoneCmd {
#[command(subcommand)]
pub command: MilestoneSub,
}
#[derive(Debug, Subcommand)]
pub enum MilestoneSub {
/// List milestones in a repo.
List(ListArgs),
/// View a milestone by id.
View(IdArgs),
/// Create a milestone.
Create(CreateArgs),
/// Edit a milestone.
Edit(EditArgs),
/// Close a milestone.
Close(IdArgs),
/// Reopen a closed milestone.
Reopen(IdArgs),
/// Delete a milestone.
Delete(DeleteArgs),
/// Assign an issue or PR to a milestone (or detach with --milestone none).
Assign(AssignArgs),
}
#[derive(Debug, Args)]
pub struct ListArgs {
#[command(flatten)]
pub r: RepoFlag,
#[arg(short = 's', long, default_value = "open")]
pub state: String,
#[arg(long)]
pub json: bool,
}
#[derive(Debug, Args)]
pub struct IdArgs {
#[command(flatten)]
pub r: RepoFlag,
pub id: u64,
#[arg(long)]
pub json: bool,
}
#[derive(Debug, Args)]
pub struct CreateArgs {
#[command(flatten)]
pub r: RepoFlag,
pub title: String,
#[arg(short = 'b', long)]
pub body: Option<String>,
/// Due date in ISO-8601 (e.g. 2026-06-01T00:00:00Z).
#[arg(long)]
pub due_on: Option<String>,
}
#[derive(Debug, Args)]
pub struct EditArgs {
#[command(flatten)]
pub r: RepoFlag,
pub id: u64,
#[arg(short = 't', long)]
pub title: Option<String>,
#[arg(short = 'b', long)]
pub body: Option<String>,
#[arg(long)]
pub due_on: Option<String>,
#[arg(long)]
pub body_editor: bool,
}
#[derive(Debug, Args)]
pub struct DeleteArgs {
#[command(flatten)]
pub r: RepoFlag,
pub id: u64,
#[arg(short = 'y', long)]
pub yes: bool,
}
#[derive(Debug, Args)]
pub struct AssignArgs {
#[command(flatten)]
pub r: RepoFlag,
/// Issue or PR number.
pub issue: u64,
/// Milestone id, or `none` to detach.
#[arg(long)]
pub milestone: String,
}
pub async fn run(cmd: MilestoneCmd, host: Option<&str>) -> Result<()> {
match cmd.command {
MilestoneSub::List(args) => list(args, host).await,
MilestoneSub::View(args) => view(args, host).await,
MilestoneSub::Create(args) => create(args, host).await,
MilestoneSub::Edit(args) => edit(args, host).await,
MilestoneSub::Close(args) => set_state(args, host, "closed").await,
MilestoneSub::Reopen(args) => set_state(args, host, "open").await,
MilestoneSub::Delete(args) => delete(args, host).await,
MilestoneSub::Assign(args) => assign(args, host).await,
}
}
async fn list(args: ListArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let items = api::milestone::list(&ctx.client, &ctx.owner, &ctx.name, &args.state).await?;
if args.json {
return output::print_json(&serde_json::to_value(&items)?);
}
if items.is_empty() {
println!("(no milestones)");
return Ok(());
}
let rows: Vec<Vec<String>> = items
.iter()
.map(|m| {
let due = m
.due_on
.map(|d| d.format("%Y-%m-%d").to_string())
.unwrap_or_default();
vec![
m.id.to_string(),
m.title.clone(),
m.state.clone(),
format!("{}/{}", m.closed_issues, m.open_issues + m.closed_issues),
due,
]
})
.collect();
print!(
"{}",
output::render_table(&["ID", "TITLE", "STATE", "DONE", "DUE"], &rows)
);
Ok(())
}
async fn view(args: IdArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let m = api::milestone::get(&ctx.client, &ctx.owner, &ctx.name, args.id).await?;
if args.json {
return output::print_json(&serde_json::to_value(&m)?);
}
let total = m.open_issues + m.closed_issues;
let pct = (m.closed_issues.checked_mul(100))
.and_then(|n| n.checked_div(total))
.unwrap_or(0);
println!("{}", output::bold(&format!("{} ({})", m.title, m.state)));
println!(
"Progress: {}/{} ({}%)",
m.closed_issues, total, pct
);
if let Some(d) = m.due_on {
println!("Due: {}", d.format("%Y-%m-%d"));
}
if !m.description.is_empty() {
println!();
println!("{}", m.description);
}
Ok(())
}
async fn create(args: CreateArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let body = editor::read_body(args.body.as_deref())?;
let m = api::milestone::create(
&ctx.client,
&ctx.owner,
&ctx.name,
&CreateMilestone {
title: &args.title,
description: body.as_deref(),
due_on: args.due_on.as_deref(),
},
)
.await?;
println!("✓ Created milestone #{}: {}", m.id, m.title);
Ok(())
}
async fn edit(args: EditArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let body = if args.body_editor {
let existing = api::milestone::get(&ctx.client, &ctx.owner, &ctx.name, args.id).await?;
Some(editor::edit_text(
"MILESTONE_DESCRIPTION.md",
&existing.description,
)?)
} else {
args.body
};
let m = api::milestone::edit(
&ctx.client,
&ctx.owner,
&ctx.name,
args.id,
&EditMilestone {
title: args.title.as_deref(),
description: body.as_deref(),
state: None,
due_on: args.due_on.as_deref(),
},
)
.await?;
println!("✓ Updated milestone #{}: {}", m.id, m.title);
Ok(())
}
async fn set_state(args: IdArgs, host: Option<&str>, state: &str) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let m = api::milestone::edit(
&ctx.client,
&ctx.owner,
&ctx.name,
args.id,
&EditMilestone {
state: Some(state),
..Default::default()
},
)
.await?;
println!("✓ Milestone #{} is now {}", m.id, m.state);
Ok(())
}
async fn delete(args: DeleteArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
if !args.yes {
let ans = editor::prompt_line(&format!(
"Delete milestone {}? Type the id to confirm",
args.id
))?;
if ans != args.id.to_string() {
return Err(anyhow::anyhow!("aborted"));
}
}
api::milestone::delete(&ctx.client, &ctx.owner, &ctx.name, args.id).await?;
println!("✓ Deleted milestone {}", args.id);
Ok(())
}
async fn assign(args: AssignArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let milestone_id = if args.milestone.eq_ignore_ascii_case("none") {
None
} else {
Some(
args.milestone
.parse::<u64>()
.map_err(|_| anyhow::anyhow!("--milestone must be a number or 'none'"))?,
)
};
api::milestone::assign(&ctx.client, &ctx.owner, &ctx.name, args.issue, milestone_id).await?;
match milestone_id {
Some(id) => println!("✓ Assigned #{} to milestone {}", args.issue, id),
None => println!("✓ Detached #{} from any milestone", args.issue),
}
Ok(())
}


@@ -11,6 +11,7 @@ pub mod hook;
pub mod issue;
pub mod key;
pub mod label;
pub mod milestone;
pub mod org;
pub mod pr;
pub mod protect;
@@ -48,6 +49,11 @@ pub struct Cli {
#[arg(long, global = true)]
pub no_pager: bool,
/// Comma-separated list of fields to keep in `--json` output (gh-style
/// projection). Supports dotted paths like `owner.login`.
#[arg(long, global = true, value_name = "FIELDS")]
pub json_fields: Option<String>,
#[command(subcommand)]
pub command: Command,
}
@@ -66,6 +72,8 @@ pub enum Command {
Release(release::ReleaseCmd),
/// Work with labels.
Label(label::LabelCmd),
/// Work with milestones.
Milestone(milestone::MilestoneCmd),
/// Manage workflow runs (Forgejo Actions).
Run(workflow::RunCmd),
/// Manage Actions secrets.
@@ -120,6 +128,7 @@ pub struct CompletionArgs {
pub async fn run(cli: Cli) -> Result<()> {
crate::client::set_debug(cli.debug);
crate::output::set_json_fields(cli.json_fields.as_deref());
// Page commands whose output can run long. Pass `cli.no_pager` through
// explicitly so we don't have to mutate the process environment to opt
// out, which would be a process-wide side effect and an unsafe set_var.
@@ -131,6 +140,7 @@ pub async fn run(cli: Cli) -> Result<()> {
| Command::Search(_)
| Command::Status(_)
| Command::Label(_)
| Command::Milestone(_)
| Command::Run(_)
| Command::Api(_) => crate::output::pager::maybe_start(cli.no_pager),
_ => None,
@@ -159,6 +169,7 @@ async fn dispatch(command: Command, host: Option<&str>) -> Result<()> {
Command::Pr(cmd) => pr::run(cmd, host).await,
Command::Release(cmd) => release::run(cmd, host).await,
Command::Label(cmd) => label::run(cmd, host).await,
Command::Milestone(cmd) => milestone::run(cmd, host).await,
Command::Run(cmd) => workflow::run(cmd, host).await,
Command::Secret(cmd) => workflow::run_secret(cmd, host).await,
Command::Variable(cmd) => workflow::run_variable(cmd, host).await,


@@ -42,6 +42,10 @@ pub enum PrSub {
Ready(SimpleArgs),
/// Submit a review (approve / request changes / comment).
Review(ReviewArgs),
/// Request reviews from specific users.
RequestReview(ReviewerArgs),
/// Remove pending reviewer requests.
UnrequestReview(ReviewerArgs),
/// Cross-repo dashboard of your PRs: created, review-requested, mentions.
Status(StatusArgs),
/// Check out a pull request locally.
@@ -186,6 +190,16 @@ pub struct ReviewArgs {
pub body: Option<String>,
}
#[derive(Debug, Args)]
pub struct ReviewerArgs {
#[command(flatten)]
pub r: RepoFlag,
pub number: u64,
/// Reviewer usernames (repeatable, comma-separated also allowed).
#[arg(value_name = "USERNAME")]
pub reviewers: Vec<String>,
}
#[derive(Debug, Args)]
pub struct StatusArgs {
#[arg(short = 'L', long, default_value_t = 20)]
@@ -237,6 +251,8 @@ pub async fn run(cmd: PrCmd, host: Option<&str>) -> Result<()> {
PrSub::Checks(args) => checks(args, host).await,
PrSub::Ready(args) => ready(args, host).await,
PrSub::Review(args) => review(args, host).await,
PrSub::RequestReview(args) => request_reviewers(args, host).await,
PrSub::UnrequestReview(args) => unrequest_reviewers(args, host).await,
PrSub::Status(args) => status(args, host).await,
PrSub::Checkout(args) => checkout(args, host).await,
PrSub::Merge(args) => merge(args, host).await,
@@ -574,6 +590,64 @@ async fn review(args: ReviewArgs, host: Option<&str>) -> Result<()> {
Ok(())
}
async fn request_reviewers(args: ReviewerArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let reviewers = expand_reviewers(args.reviewers)?;
api::pull::request_reviewers(
&ctx.client,
&ctx.owner,
&ctx.name,
args.number,
reviewers.clone(),
)
.await?;
println!(
"✓ Requested review from {} on #{}",
reviewers.join(", "),
args.number
);
Ok(())
}
async fn unrequest_reviewers(args: ReviewerArgs, host: Option<&str>) -> Result<()> {
let ctx = resolve_repo(args.r.repo.as_deref(), host)?;
let reviewers = expand_reviewers(args.reviewers)?;
api::pull::unrequest_reviewers(
&ctx.client,
&ctx.owner,
&ctx.name,
args.number,
reviewers.clone(),
)
.await?;
println!(
"✓ Removed review request from {} on #{}",
reviewers.join(", "),
args.number
);
Ok(())
}
/// Accept reviewers as either repeated args or comma-separated within one
/// arg, since both forms come up naturally in shell invocations.
fn expand_reviewers(raw: Vec<String>) -> Result<Vec<String>> {
let mut out: Vec<String> = Vec::new();
for r in raw {
for piece in r.split(',') {
let p = piece.trim();
if !p.is_empty() {
out.push(p.to_string());
}
}
}
if out.is_empty() {
return Err(anyhow!(
"no reviewers provided (pass one or more usernames)"
));
}
Ok(out)
}
async fn status(args: StatusArgs, host: Option<&str>) -> Result<()> {
// Cross-repo dashboard: PRs authored by you, and PRs where you're listed
// as a reviewer or where you're mentioned.


@@ -49,6 +49,31 @@ pub enum RepoSub {
Mirror(MirrorArgs),
/// Manually trigger a sync on a pull-mirror.
MirrorSync(MirrorSyncArgs),
/// Watch a repo (subscribe to its notifications).
Watch(SimpleRepoArgs),
/// Stop watching a repo.
Unwatch(SimpleRepoArgs),
/// Star a repo.
Star(SimpleRepoArgs),
/// Unstar a repo.
Unstar(SimpleRepoArgs),
/// List your starred repos.
Starred(StarredArgs),
}
#[derive(Debug, Args)]
pub struct SimpleRepoArgs {
pub repo: Option<String>,
#[arg(short = 'R', long = "repo")]
pub repo_flag: Option<String>,
}
#[derive(Debug, Args)]
pub struct StarredArgs {
#[arg(short = 'L', long, default_value_t = 30)]
pub limit: u32,
#[arg(long)]
pub json: bool,
}
#[derive(Debug, Args)]
@@ -251,9 +276,66 @@ pub async fn run(cmd: RepoCmd, host: Option<&str>) -> Result<()> {
RepoSub::Topics(args) => topics(args, host).await,
RepoSub::Mirror(args) => mirror(args, host).await,
RepoSub::MirrorSync(args) => mirror_sync(args, host).await,
RepoSub::Watch(args) => watch(args, host, true).await,
RepoSub::Unwatch(args) => watch(args, host, false).await,
RepoSub::Star(args) => star(args, host, true).await,
RepoSub::Unstar(args) => star(args, host, false).await,
RepoSub::Starred(args) => starred(args, host).await,
}
}
async fn watch(args: SimpleRepoArgs, host: Option<&str>, on: bool) -> Result<()> {
let ctx = resolve_repo(pick_repo(args.repo.as_ref(), args.repo_flag.as_ref()), host)?;
if on {
api::repo::watch(&ctx.client, &ctx.owner, &ctx.name).await?;
println!("✓ Watching {}/{}", ctx.owner, ctx.name);
} else {
api::repo::unwatch(&ctx.client, &ctx.owner, &ctx.name).await?;
println!("✓ Unwatched {}/{}", ctx.owner, ctx.name);
}
Ok(())
}
async fn star(args: SimpleRepoArgs, host: Option<&str>, on: bool) -> Result<()> {
let ctx = resolve_repo(pick_repo(args.repo.as_ref(), args.repo_flag.as_ref()), host)?;
if on {
api::repo::star(&ctx.client, &ctx.owner, &ctx.name).await?;
println!("★ Starred {}/{}", ctx.owner, ctx.name);
} else {
api::repo::unstar(&ctx.client, &ctx.owner, &ctx.name).await?;
println!("☆ Unstarred {}/{}", ctx.owner, ctx.name);
}
Ok(())
}
async fn starred(args: StarredArgs, host: Option<&str>) -> Result<()> {
let client = Client::connect(host)?;
let repos = api::repo::list_starred(&client, args.limit).await?;
if args.json {
return output::print_json(&serde_json::to_value(&repos)?);
}
if repos.is_empty() {
println!("(no starred repos)");
return Ok(());
}
let rows: Vec<Vec<String>> = repos
.iter()
.map(|r| {
let vis = if r.private { "private" } else { "public" };
vec![
r.full_name.clone(),
truncate(&r.description, 60),
vis.into(),
]
})
.collect();
print!(
"{}",
output::render_table(&["NAME", "DESCRIPTION", "VIS"], &rows)
);
Ok(())
}
async fn mirror(args: MirrorArgs, host: Option<&str>) -> Result<()> {
let client = Client::connect(host)?;
let me = api::user::current(&client).await?;


@@ -21,6 +21,20 @@ pub enum SearchSub {
Prs(QueryArgs),
/// Search users.
Users(QueryArgs),
/// Search code across visible repos (or a single repo with `-R`).
Code(CodeArgs),
}
#[derive(Debug, Args)]
pub struct CodeArgs {
pub query: String,
/// Restrict to one repo. Omit to search everything visible.
#[arg(short = 'R', long = "repo")]
pub repo: Option<String>,
#[arg(short = 'L', long, default_value_t = 20)]
pub limit: u32,
#[arg(long)]
pub json: bool,
}
#[derive(Debug, Args)]
@@ -39,9 +53,51 @@ pub async fn run(cmd: SearchCmd, host: Option<&str>) -> Result<()> {
SearchSub::Issues(args) => search_issues(&client, args, "issues").await,
SearchSub::Prs(args) => search_issues(&client, args, "pulls").await,
SearchSub::Users(args) => search_users(&client, args).await,
SearchSub::Code(args) => search_code(&client, args).await,
}
}
async fn search_code(client: &Client, args: CodeArgs) -> Result<()> {
let hits = match args.repo.as_deref() {
Some(slug) => {
let (owner, name) = crate::api::split_repo(slug)?;
crate::api::search::code_in_repo(client, owner, name, &args.query, args.limit).await?
}
None => crate::api::search::code(client, &args.query, args.limit).await?,
};
if args.json {
return output::print_json(&serde_json::to_value(&hits)?);
}
if hits.is_empty() {
println!("(no matches)");
return Ok(());
}
for h in hits {
println!(
"{} {}",
output::bold(&h.path),
output::dim(&format!(
"{} match{}",
h.matches,
if h.matches == 1 { "" } else { "es" }
)),
);
let snippet = if h.raw_lines.is_empty() {
&h.formatted_lines
} else {
&h.raw_lines
};
for line in snippet.lines().take(5) {
println!(" {line}");
}
if !h.html_url.is_empty() {
println!(" {}", output::dim(&h.html_url));
}
println!();
}
Ok(())
}
async fn search_repos(client: &Client, args: QueryArgs) -> Result<()> {
let hits = api::search::repos(client, &args.query, args.limit).await?;
if args.json {


@@ -194,6 +194,34 @@ async fn get_all_respects_total_limit() {
assert_eq!(items.len(), 30);
}
#[tokio::test]
async fn honors_retry_after_on_429() {
let server = MockServer::start().await;
Mock::given(method("GET"))
.and(path("/api/v1/rate"))
.respond_with(
ResponseTemplate::new(429)
.insert_header("Retry-After", "0")
.set_body_json(json!({"message": "slow down"})),
)
.up_to_n_times(2)
.expect(2)
.mount(&server)
.await;
Mock::given(method("GET"))
.and(path("/api/v1/rate"))
.respond_with(ResponseTemplate::new(200).set_body_json(json!({"ok": true})))
.expect(1)
.mount(&server)
.await;
let c = client_for(&server).await;
let res: serde_json::Value = c
.json(Method::GET, "/api/v1/rate", &[], None::<&()>)
.await
.expect("eventually ok");
assert_eq!(res["ok"], true);
}
#[tokio::test]
async fn header_value_rejects_bad_token() {
let server = MockServer::start().await;


@@ -224,6 +224,18 @@ impl Client {
match self.http.execute(req).await {
Ok(res) => {
let status = res.status();
// 429 (rate limit): always honor Retry-After when retryable.
if status == StatusCode::TOO_MANY_REQUESTS && attempt + 1 < retries {
let wait = parse_retry_after(&res).unwrap_or_else(|| {
std::time::Duration::from_millis(200 * (1u64 << attempt))
});
let wait = wait.min(std::time::Duration::from_secs(30));
if debug_enabled() {
eprintln!("← 429 {} (rate-limited, sleeping {:?})", res.url(), wait);
}
tokio::time::sleep(wait).await;
continue;
}
if status.is_server_error() && attempt + 1 < retries {
if debug_enabled() {
eprintln!(
@@ -377,6 +389,16 @@ fn is_idempotent(m: &Method) -> bool {
matches!(m.as_str(), "GET" | "HEAD" | "OPTIONS" | "PUT" | "DELETE")
}
/// Parse a Retry-After response header. Supports both delta-seconds (the
/// common case) and HTTP-date (which we don't bother parsing — we just bail
/// out and let the caller use exponential backoff instead).
fn parse_retry_after(res: &Response) -> Option<std::time::Duration> {
let v = res.headers().get(reqwest::header::RETRY_AFTER)?;
let s = v.to_str().ok()?.trim();
let secs: u64 = s.parse().ok()?;
Some(std::time::Duration::from_secs(secs))
}
async fn backoff(attempt: u32) {
let base_ms: u64 = 200;
let delay = base_ms * (1u64 << attempt);
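The wait computation in the 429 branch can be sketched in isolation. This is a stdlib-only illustration, not the client code itself; `retry_wait` is a hypothetical helper that mirrors the assumed behavior: a numeric `Retry-After` wins, an unparseable value (e.g. the HTTP-date form) falls back to the 200 ms exponential backoff, and the result is capped at 30 s either way.

```rust
use std::time::Duration;

// Hypothetical helper mirroring the retry logic above: prefer a numeric
// Retry-After header; otherwise fall back to 200ms * 2^attempt. The final
// wait is capped at 30 seconds in both cases.
fn retry_wait(retry_after: Option<&str>, attempt: u32) -> Duration {
    let wait = retry_after
        .and_then(|s| s.trim().parse::<u64>().ok()) // HTTP-date values fail to parse
        .map(Duration::from_secs)
        .unwrap_or_else(|| Duration::from_millis(200 * (1u64 << attempt)));
    wait.min(Duration::from_secs(30))
}

fn main() {
    assert_eq!(retry_wait(Some("2"), 0), Duration::from_secs(2));
    // HTTP-date form isn't parsed; exponential fallback kicks in.
    assert_eq!(retry_wait(Some("Wed, 21 Oct 2026 07:28:00 GMT"), 1), Duration::from_millis(400));
    assert_eq!(retry_wait(Some("120"), 0), Duration::from_secs(30)); // capped
    println!("retry_wait ok");
}
```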


@@ -85,6 +85,7 @@ const KNOWN_SUBCOMMANDS: &[&str] = &[
"pr",
"release",
"label",
"milestone",
"run",
"secret",
"variable",

src/output/json_filter.rs (new file, 107 lines)

@@ -0,0 +1,107 @@
//! gh-style `--json field1,field2` projection. Given a JSON value (which is
//! typically either an object representing a single resource or an array of
//! such objects) and a comma-separated list of field names, return a
//! projection that contains only those fields.
//!
//! Nested fields (`a.b`) are supported. Missing fields produce `null` rather
//! than an error, matching gh's behavior.
use serde_json::{Map, Value};
/// Parse `"a,b , c"` into `["a", "b", "c"]`.
pub fn parse_fields(spec: &str) -> Vec<String> {
spec.split(',')
.map(|s| s.trim().to_string())
.filter(|s| !s.is_empty())
.collect()
}
/// Project a value down to the requested fields. If `value` is an array,
/// each element is projected independently. Other scalar values pass through
/// unchanged.
pub fn project(value: &Value, fields: &[String]) -> Value {
match value {
Value::Array(items) => Value::Array(items.iter().map(|v| project_one(v, fields)).collect()),
Value::Object(_) => project_one(value, fields),
other => other.clone(),
}
}
fn project_one(value: &Value, fields: &[String]) -> Value {
let mut out = Map::new();
for f in fields {
// The output key is the full dotted path so users can disambiguate.
// A user asking for `owner.login` gets that exact key in the result.
let extracted = extract_path(value, f);
out.insert(f.clone(), extracted);
}
Value::Object(out)
}
fn extract_path(value: &Value, path: &str) -> Value {
let mut current = value;
for segment in path.split('.') {
current = match current.get(segment) {
Some(v) => v,
None => return Value::Null,
};
}
current.clone()
}
#[cfg(test)]
mod tests {
use super::*;
use serde_json::json;
#[test]
fn parses_field_list() {
assert_eq!(parse_fields("a,b,c"), vec!["a", "b", "c"]);
assert_eq!(parse_fields(" a , b , c "), vec!["a", "b", "c"]);
assert_eq!(parse_fields(""), Vec::<String>::new());
assert_eq!(parse_fields(", ,a"), vec!["a"]);
}
#[test]
fn projects_single_object() {
let v = json!({"id": 1, "title": "hi", "body": "long"});
let p = project(&v, &["id".into(), "title".into()]);
assert_eq!(p, json!({"id": 1, "title": "hi"}));
}
#[test]
fn projects_array_of_objects() {
let v = json!([
{"id": 1, "title": "a", "body": "x"},
{"id": 2, "title": "b", "body": "y"},
]);
let p = project(&v, &["id".into(), "title".into()]);
assert_eq!(
p,
json!([
{"id": 1, "title": "a"},
{"id": 2, "title": "b"},
])
);
}
#[test]
fn supports_nested_fields() {
let v = json!({"owner": {"login": "alice", "id": 7}});
let p = project(&v, &["owner.login".into()]);
assert_eq!(p, json!({"owner.login": "alice"}));
}
#[test]
fn missing_field_yields_null() {
let v = json!({"id": 1});
let p = project(&v, &["id".into(), "missing".into()]);
assert_eq!(p, json!({"id": 1, "missing": null}));
}
#[test]
fn scalar_passes_through() {
let v = json!("hello");
assert_eq!(project(&v, &["id".into()]), json!("hello"));
}
}
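The dotted-path walk that `extract_path` performs can also be illustrated standalone. This sketch swaps `serde_json::Value` for a tiny hand-rolled enum (`Val` and `extract` are purely illustrative, not part of the crate) so it runs with the standard library alone:

```rust
use std::collections::HashMap;

// Minimal stand-in for serde_json::Value, just enough to show the walk.
enum Val {
    Str(String),
    Obj(HashMap<String, Val>),
}

// Follow each dot-separated segment into nested objects; any miss (absent
// key, or a non-object midway) short-circuits to None, which the real
// filter renders as `null` instead of erroring.
fn extract<'a>(mut cur: &'a Val, path: &str) -> Option<&'a Val> {
    for seg in path.split('.') {
        cur = match cur {
            Val::Obj(m) => m.get(seg)?,
            _ => return None,
        };
    }
    Some(cur)
}

fn main() {
    let mut owner = HashMap::new();
    owner.insert("login".to_string(), Val::Str("alice".to_string()));
    let mut root = HashMap::new();
    root.insert("owner".to_string(), Val::Obj(owner));
    let v = Val::Obj(root);

    assert!(matches!(extract(&v, "owner.login"), Some(Val::Str(s)) if s == "alice"));
    assert!(extract(&v, "owner.id").is_none());
    println!("dotted-path walk ok");
}
```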


@@ -1,8 +1,29 @@
pub mod json_filter;
pub mod pager;
use std::io::IsTerminal;
use std::sync::OnceLock;
use anyhow::Result;
static JSON_FIELDS: OnceLock<Vec<String>> = OnceLock::new();
/// Set the global `--json-fields` selector. Called once at startup from
/// `cli::run`. After this, every `print_json` call routes through the filter.
pub fn set_json_fields(spec: Option<&str>) {
let parsed: Vec<String> = match spec {
Some(s) => json_filter::parse_fields(s),
None => Vec::new(),
};
let _ = JSON_FIELDS.set(parsed);
}
fn json_fields() -> Option<&'static [String]> {
JSON_FIELDS
.get()
.filter(|v| !v.is_empty())
.map(|v| v.as_slice())
}
use owo_colors::{OwoColorize, Style};
use tabled::settings::{object::Columns, Alignment, Modify, Padding, Style as TStyle};
use tabled::{builder::Builder, Table};
@@ -40,9 +61,14 @@ pub fn render_table(headers: &[&str], rows: &[Vec<String>]) -> String {
table.to_string()
}
/// Print JSON pretty-formatted to stdout. If `--json-fields foo,bar` was
/// set globally, the value is projected through that filter first.
pub fn print_json(value: &serde_json::Value) -> Result<()> {
let projected = match json_fields() {
Some(fs) => json_filter::project(value, fs),
None => value.clone(),
};
let text = serde_json::to_string_pretty(&projected)?;
println!("{text}");
Ok(())
}


@@ -1,29 +1,30 @@
//! Pager redirection. When stdout is a TTY (and the user hasn't opted out),
//! `maybe_start()` spawns `$FJ_PAGER` / `$PAGER` / `less -FRX` and redirects
//! our stdout into its stdin. The returned guard waits on the child on drop,
//! so all output gets flushed before the process exits.
//!
//! Implemented for Unix via `libc::dup2`. The Windows build provides a stub
//! that returns `None` from `maybe_start`, so `println!` writes directly to
//! the terminal as usual. (Implementing the Windows version would mean
//! duplicating the stdout HANDLE via `SetStdHandle`; not done yet.)
#[cfg(unix)]
mod imp {
use std::io::IsTerminal;
use std::os::fd::IntoRawFd;
use std::os::unix::io::AsRawFd;
use std::process::{Child, Command, Stdio};
pub struct PagerGuard {
/// Saved copy of the original stdout fd so we can restore it before
/// waiting on the child (otherwise the child would block reading from
/// a stdin pipe we still hold open).
saved_stdout: Option<i32>,
child: Option<Child>,
}
impl Drop for PagerGuard {
fn drop(&mut self) {
use std::io::Write;
let _ = std::io::stdout().flush();
@@ -37,15 +38,9 @@ impl Drop for PagerGuard {
let _ = c.wait();
}
}
}
pub fn maybe_start(force_disabled: bool) -> Option<PagerGuard> {
if force_disabled || std::env::var_os("FJ_NO_PAGER").is_some() {
return None;
}
@@ -64,7 +59,6 @@ pub fn maybe_start(force_disabled: bool) -> Option<PagerGuard> {
let program = parts.next()?;
let args: Vec<&str> = parts.collect();
let saved = unsafe { libc::dup(libc::STDOUT_FILENO) };
if saved < 0 {
return None;
@@ -84,7 +78,6 @@ pub fn maybe_start(force_disabled: bool) -> Option<PagerGuard> {
}
};
let Some(stdin) = child.stdin.take() else {
unsafe { libc::close(saved) };
let _ = child.kill();
@@ -99,12 +92,29 @@ pub fn maybe_start(force_disabled: bool) -> Option<PagerGuard> {
let _ = child.wait();
return None;
}
let _ = stdin.into_raw_fd();
Some(PagerGuard {
saved_stdout: Some(saved),
child: Some(child),
})
}
}
#[cfg(not(unix))]
mod imp {
/// Stub guard for non-Unix platforms. Holds no state.
pub struct PagerGuard;
/// Paging is not implemented on this platform: `maybe_start` always
/// returns `None`, so stdout stays connected directly to the terminal.
/// `--no-pager` and `FJ_NO_PAGER` are still accepted (they simply have no
/// extra effect) for compatibility with shared scripts.
pub fn maybe_start(_force_disabled: bool) -> Option<PagerGuard> {
None
}
}
pub use imp::maybe_start;
#[allow(unused_imports)]
pub use imp::PagerGuard;