diff --git a/README.md b/README.md index d01c9c0e47178e9cb663224e8b9e88e6b4ae4310..ce86455c4aacd6e8aa0fd8861bba4840f9f32d40 100644 --- a/README.md +++ b/README.md @@ -32,7 +32,7 @@ See [this document](docs/BASICDEMO.md) if you would like to run the basic demo. *The container is completely ephemeral. Once you remove the container, all of your data in wwwherd will be gone. You can export much of your setup in the Admin screen, but it will not keep your run history and other data.* -### Running with TSL/SSL. +### Running with TLS/SSL. To run with SSL, you need to mount a volume to /wwwherd/import inside the container. The volume must contain two files--server.key and server.cert. These are your SSL key and certificate. Self-signed will work ok, but your @@ -69,6 +69,19 @@ this is with the following command: openssl rand -base64 32 ``` +### Sending data to an OpenTelemetry collector. + +To send data to an [OpenTelemetry](https://opentelemetry.io/) [collector](https://opentelemetry.io/docs/collector/), simply +add the environment variable "OTELENDPOINT" to the docker run command. + +The following is an example of how to run the container: +``` +docker run -p80:80 -p8888:8888 -e OTELENDPOINT='grpc://191.168.1.41:4317/' -d wwwherd/all:latest +``` + +gRPC transport is tested and supported. HTTP has been implemented, but it is not regularly tested and thus not directly +supported. (Contact us if you have an urgent need for it.) + ### Running the demo. Now that you have a running container, you can run our demo to see how it works. Just follow [these instructions](docs/BASICDEMO.md). @@ -78,6 +91,8 @@ Now that you have a running container, you can run our demo to see how it works. 
- [Building and developing on your laptop/pc](docs/LOCALDEV.md) - [Basic demo](docs/BASICDEMO.md) - [AI Providers](docs/PROVIDERS.md) +- [Architecture and design information](docs/ARCH.md) +- [REST API](docs/RESTAPI.md) ## Links diff --git a/RELEASE b/RELEASE index 31f4ef9b4616e3aa7cf22e856b0feba9e5e101ee..115f3c3d71f57d8f0291eac40eaa77ee71044375 100644 --- a/RELEASE +++ b/RELEASE @@ -1,3 +1,51 @@ +------------------------------------------------------------------------------------------------------------------------ +0.1.1 Feature release + +- Add simple configuration of OpenTelemetry to the all-in-one container. (otel is inherently supported in the ginfra + service base and need only be configured to start sending data.) +- Store case history as a diff. Add a case history diff rendering component, as well as a component to show all history. +- Add milestones to case charts. Currently adding case history as milestones, but expandable to other milestones. +- Add charting for case cost data and performance data. +- Add change history for cases. +- Add admin UI to wipe the database. +- Improve log rendering to make it easier to read. +- Add bug reporting for cases. Add UI to create and manage bugs. +- Accept or autogenerate JWT secrets used for authorization. At a minimum, a JWT secret is generated when starting the + all-in-one container. +- Cost statistics are gathered and stored in the stats tables and sent to an OpenTelemetry collector, if configured. + Total cost info is shown in the run report. +- Add a run report with an option to download a JSON version. Export report generation as a REST API. +- Add admin UI to add authorization tokens for use with the REST API. +- Add REST API access points to allow starting and managing runs. This was a request by a team that plans on hooking it + up to their CI. +- Improve documentation around architecture, REST and other items. +- Add scripts to clean and populate the database to enable testing. 
Includes data autogeneration based on parameters. +- Add logging in the dispatcher to improve statistics capture. +- New website that includes a feature tour. + +Fixes and design changes +- Fix agent browser startup so failures don't break the runner login and are appropriately reported to the user. It + specifically calls out when the target host is unreachable. +- Fixed several severe bugs that allowed the runner to get "lost" and create ghost runs. +- Change the default model for OpenRouter to qwen/qwen2.5-vl-72b-instruct. The free version just doesn't work due to rate + limiting causing too many timeouts. +- Reformat runner logs to a consistent format that is easier and much faster to render. +- Refactor the runner to use a message-based system, rather than stdin and stdout, for control. The new scheme could use the same message + mechanism that is planned to replace dispatcher and runner communication, dramatically improving the cloudification of all + parts. +- Remove all references to organization id in the frontend. +- Fix design and run queries in the server to improve performance and accuracy. +- Standardize the snackbar component across all pages and tabs. +- Fix the db to allow usernames to be replicated across organizations, though a username must be unique within an organization. +- Fix a severe server bug where race conditions were caused by not await'ing some calls. +- Modify the server db client to allow for using the postgres user password. (Needed for the DB wipe.) The postgres user + password is never stored and is gathered from the user at the moment it is needed. +- Fix a bug in the server where run data was fetched from the db instead of design data. +- Create separate vue components for CaseDialog, DecorationChips, FilterTagsDialog, LogsDialog, ReportDialog, + SnackbarNotification and CaseData, to clean up route view components and improve reusability. +- Fix the testapp build to properly include the animal pictures. 
+ +------------------------------------------------------------------------------------------------------------------------ 0.1.0 Initial release. This contains the original concept and foundations for the future. This is the minimal capability. From here diff --git a/VERSION b/VERSION index 6c6aa7cb0918dc7a1cfa3635fb7f8792ac4cb218..6da28dde76d6550e3d398a70a9a8231256774669 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -0.1.0 \ No newline at end of file +0.1.1 \ No newline at end of file diff --git a/bin/convert.sh b/bin/convert.sh new file mode 100755 index 0000000000000000000000000000000000000000..3eea2bd744d2604312a54d486432ced895283853 --- /dev/null +++ b/bin/convert.sh @@ -0,0 +1,3 @@ +#!/bin/bash +ffmpeg -i ${1} -c:v libvpx-vp9 -b:v 0 -crf 23 -pass 1 -an -f webm -row-mt 1 -y /dev/null +ffmpeg -i ${1} -vf "scale=1000:594:flags=lanczos" -c:v libvpx-vp9 -b:v 0 -crf 23 -pass 2 -an -row-mt 1 ${2} diff --git a/common/data/db.go b/common/data/db.go index d0035c1fdffbeaf49922155942989c5861f8f256..1c8778bda10c2b062d446de74035b73ca3d40b9b 100644 --- a/common/data/db.go +++ b/common/data/db.go @@ -11,12 +11,12 @@ package data // RunStatus values for tracking run state const ( - RunStatusNew = 1 - RunStatusWaiting = 2 - RunStatusRunning = 3 - RunStatusCancelled = 4 - RunStatusCancelling = 5 - RunStatusDone = 6 + RunStatusNew = 1 + RunStatusWaiting = 2 + RunStatusRunning = 3 + RunStatusCancelling = 4 + RunStatusCancelled = 5 + RunStatusDone = 6 RunStatusError = 7 // Additional related to case management. 
@@ -26,7 +26,7 @@ const ( RunStatusSkipped = 10 RunStatusBlocked = 11 RunStatusDisabled = 12 - RunStatusUnknown = 123 + RunStatusUnknown = 13 ) // RunStatus represents the current state of a run @@ -69,3 +69,10 @@ func (state RunStatus) String() string { panic("BUG: nonexistent RunState") } } + +const ( + ProviderAnthropic = 1 + ProviderOpenRouter = 2 + ProviderSelfHost = 3 +) + diff --git a/common/data/strings.go b/common/data/strings.go new file mode 100644 index 0000000000000000000000000000000000000000..d558f5e31c426d9b2460dba9069be56bef2a2a80 --- /dev/null +++ b/common/data/strings.go @@ -0,0 +1,46 @@ +/* +Package data +Copyright (c) 2025 Ginfra Project +[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt + +Unified message strings for messages (logs, metrics, etc). These are simple constants now but they can be abstracted later. +The values are consistent and can be used for data mining. +*/ +package data + +const ( + SMsgRunnerDone = "Done" + + // -- General ------------------------------------------------------------------------------------------------------ + + SMsgGeneralFailedSendStatus = "Failed to send status" + SMsgGeneralFailedSendOrders = "Failed to send orders" + + // -- Cost messages ------------------------------------------------------------------------------------------------ + + SMsgCostsInputCost = "latestInputCost" + SMsgCostsOutputCost = "latestOutputCost" + SMsgCostsInputTokens = "latestInputTokens" + SMsgCostsOutputTokens = "latestOutputTokens" + SMsgCostsCacheWriteInputTokens = "cacheWriteInputTokens" + SMsgCostsCacheReadInputTokens = "cacheReadInputTokens" + SMsgCostsCaseExecutionTime = "caseExecutionTime" + + // -- Runner messages ---------------------------------------------------------------------------------------------- + + SMsgRunnerRunning = "Running" + SMsgRunnerFailedSendRunLog = "Failed to send run log" + SMsgRunnerFailedSendCurrentFlow = "Failed to send current flow" + SMsgRunnerFailedSendCurrentCase = "Failed to send 
current case" + SMsgRunnerLostProcess = "Lost process has occurred" + SMsgRunnerTerminatingScriptError = "Terminating script due to error" + SMsgRunnerTerminatingStartError = "Terminating script due to startup error" + SMsgRunnerStartingFlow = "Starting flow" + SMsgRunnerStartingCase = "Starting case" + SMsgRunnerDonePendingResults = "Done, pending results" + SMsgRunnerBadDirectiveResult = "Bad directive result" + SMsgRunnerFault = "Runner fault" + SMsgRunnerMsgRx = "Runner message received" + SMsgRunnerMsgRxPanic = "Runner panic on message received" + SMsgRunnerMsgRxFatal = "Runner fatal error on message received" +) diff --git a/common/data/types.go b/common/data/types.go index f39b7720d9f77ca6bd03055f101bd6fac0e1b9f9..c14c5860f1370c6c8344c52820120f1bec112a9d 100644 --- a/common/data/types.go +++ b/common/data/types.go @@ -25,36 +25,35 @@ type OrderCaseItem struct { Type CaseType `json:"type"` Result RunStatus `json:"result"` - Order string `json:"order"` + Order string `json:"order"` Decorations string `json:"decorations"` // Runtime - //Log string `json:"log,omitempty"` - Error string `json:"error,omitempty"` - Time int64 `json:"time,omitempty"` - + Error string `json:"error,omitempty"` + Time int64 `json:"time,omitempty"` + Costs string `json:"costs,omitempty"` + // Runtime discarded - Compiled interface{} `json:"-"` - Decorated Decorations `json:"-"` + Compiled interface{} `json:"-"` + Decorated Decorations `json:"-"` } type OrderFlowItem struct { - Id int `json:"id"` - Name string `json:"name"` - Narrative string `json:"narrative"` - Decorations string `json:"decorations"` - Prompt string `json:"prompt,omitempty"` - Host string `json:"host"` - Cases []OrderCaseItem `json:"cases"` - Result RunStatus `json:"result,omitempty"` + Id int `json:"id"` + Name string `json:"name"` + Narrative string `json:"narrative"` + Decorations string `json:"decorations"` + Prompt string `json:"prompt,omitempty"` + Cases []OrderCaseItem `json:"cases"` + Result RunStatus 
`json:"result,omitempty"` // Runtime - Log string `json:"log,omitempty"` - Error string `json:"error,omitempty"` - Time int64 `json:"time,omitempty"` + //Log string `json:"log,omitempty"` + Error string `json:"error,omitempty"` + Time int64 `json:"time,omitempty"` // Runtime discarded - Decorated Decorations `json:"-"` + Decorated Decorations `json:"-"` } type Order struct { @@ -72,14 +71,14 @@ func (r *OrderCaseItem) IsFailed() bool { // = STATISTICS type ResultReport struct { - Errored int `json:"errored"` - Passed int `json:"passed"` - Failed int `json:"failed"` - Skipped int `json:"skipped"` - Blocked int `json:"blocked"` - Disabled int `json:"disabled"` - Unknown int `json:"unknown"` - Total int `json:"total"` + Errored int `json:"errored"` + Passed int `json:"passed"` + Failed int `json:"failed"` + Skipped int `json:"skipped"` + Blocked int `json:"blocked"` + Disabled int `json:"disabled"` + Unknown int `json:"unknown"` + Total int `json:"total"` Time int64 `json:"time"` // In milliseconds } @@ -121,12 +120,12 @@ type Decoration int const ( DecorationStrictText = "strict" - DecorationSoftText = "soft" + DecorationSoftText = "soft" ) const ( DecorationStrict Decoration = 1 - DecorationSoft Decoration = 2 + DecorationSoft Decoration = 2 ) // DecorationFromString converts a text token to Decoration enumeration diff --git a/common/data/values.go b/common/data/values.go new file mode 100644 index 0000000000000000000000000000000000000000..00460d296b684f5f222063586a1cb3cb4c0f8ce3 --- /dev/null +++ b/common/data/values.go @@ -0,0 +1,11 @@ +/* +Package data +Copyright (c) 2025 Ginfra Project +[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt + +Common values. 
+*/ +package data + +const DefaultLLMModelAnthropic = "claude-sonnet-4-20250514" +const DefaultLLMModelOpenApi = "qwen/qwen2.5-vl-72b-instruct" diff --git a/common/script/flow/types.go b/common/script/flow/types.go index e8039011aabe94cf5c21a0c1b48ff5225d56a5ed..2660266731570a5108e75979efb9cf1bdcebca5b 100644 --- a/common/script/flow/types.go +++ b/common/script/flow/types.go @@ -32,13 +32,15 @@ const ( FlowDirectiveCheck = "##CHECK" FlowDirectiveData = "##DATA" FlowDirectivePrompt = "##PROMPT" - FlowDirectivePromptSet = "##PROMPTSET" FlowDirectivePromptClear = "##PROMPTCLEAR" FlowDirectivePromptGlobal = "##PROMPTGLOBAL" FlowDirectivePromptGlobalClear = "##PROMPTGLOBALCLEAR" FlowDirectiveNav = "##NAV" FlowDirectivePing = "##PING" FlowDirectiveRaw = "##RAW" + + // TODO + FlowDirectivePromptSet = "##PROMPTSET" ) // String returns the string representation of the directive type @@ -151,6 +153,6 @@ func (d Directive) String() string { } type CompiledCase struct { - Directives []Directive + Directives []Directive Decorations data.Decorations } diff --git a/common/script/run_windows.go b/common/script/run_windows.go index c89172cbab210323ad2bd06be4b6f958836406ed..ac4daa6c3720f90b38e83bf2fa152b1b1b46dfb2 100644 --- a/common/script/run_windows.go +++ b/common/script/run_windows.go @@ -12,7 +12,7 @@ package script import ( "gitlab.com/ginfra/ginfra/base" - "gitlab.com/ginfra/wwwherd/magnitude/local" + "gitlab.com/ginfra/wwwherd/runner/local" "go.uber.org/zap" "os/exec" "strconv" diff --git a/common/script/runner.go b/common/script/runner.go index fea51ef2e255ea80fdcbd37eb1075e47fa1a5886..97250f6fd304ca39ea6c2768d71df24046c5c2a0 100644 --- a/common/script/runner.go +++ b/common/script/runner.go @@ -8,14 +8,13 @@ Runner package script import ( - "fmt" "gitlab.com/ginfra/ginfra/base" "io" "os" "os/exec" "strconv" "strings" - "time" + "path/filepath" ) var whereIsNode string @@ -88,7 +87,7 @@ type Runner struct { Err error } -func NewRunner(target, disposition, provider, model, 
key, url string) (*Runner, error) { +func NewRunner(token, server, target, path, provider, model, key, url string) (*Runner, error) { var r Runner switch provider { @@ -106,7 +105,7 @@ func NewRunner(target, disposition, provider, model, key, url string) (*Runner, } provider = "openai-generic" if model == "" { - model = "qwen/qwen2.5-vl-72b-instruct:free" + model = "qwen/qwen2.5-vl-72b-instruct" url = "https://openrouter.ai/api/v1" } case "3": @@ -116,11 +115,11 @@ func NewRunner(target, disposition, provider, model, key, url string) (*Runner, provider = "openai-generic" } - r.Start(target, disposition, provider, model, key, url) + r.Start(token, server, target, path, provider, model, key, url) return &r, nil } -func (r *Runner) Start(target, disposition, provider, model, key, url string) { +func (r *Runner) Start(token, server, target, path, provider, model, key, url string) { // TODO WARNING this is not entirely reliable, but there aren't a lot of catch all ways to do it. var inContainer bool if _, err := os.Stat("/.dockerenv"); err == nil { @@ -128,27 +127,24 @@ func (r *Runner) Start(target, disposition, provider, model, key, url string) { } if r.Cmd == nil { - var err error if inContainer { // xvfb-run exec tsx src/index.ts http://localhost anthropic "" "" "" - r.Cmd = exec.Command(whereIsNode, "xvfb-run", "npm", "exec", "tsx", "src/index.ts", target, disposition, provider, key, model, url) + r.Cmd = exec.Command(whereIsNode, "xvfb-run", "npm", "exec", "tsx", "src/index.ts", token, server, target, path, provider, key, model, url) } else { - r.Cmd = exec.Command(whereIsNode, "npm", "exec", "tsx", "src/index.ts", target, disposition, provider, key, model, url) + r.Cmd = exec.Command(whereIsNode, "npm", "exec", "tsx", "src/index.ts", token, server, target, path, provider, key, model, url) } - r.Cmd.Dir = "." 
- r.Cmd.Env = os.Environ() - r.Stdin, err = r.Cmd.StdinPipe() - if err != nil { - panic(err) - } - r.Stdout, err = r.Cmd.StdoutPipe() + + outfile, err := os.Create(filepath.Join(path, "command.log")) if err != nil { - panic(err) + panic("Could not create command log: " + err.Error()) } - go func() { - r.Err = r.Cmd.Run() - fmt.Println("Exited with error:", r.Err) - }() + r.Cmd.Stdout = outfile + r.Cmd.Stderr = outfile + + r.Cmd.Dir = "." + r.Cmd.Env = os.Environ() + if err = r.Cmd.Start(); err != nil { + panic("Could not start runner: " + err.Error()) + } + } } @@ -161,13 +157,11 @@ func (r *Runner) IsRunning() bool { } func (r *Runner) Kill() error { - r.Stdin.Write([]byte("##QUIT\n")) - time.Sleep(100 * time.Millisecond) - + if r.IsRunning() { err := TerminateCommand(r.Cmd) r.Cmd = nil - return err + return err } return nil } diff --git a/common/script/types.go b/common/script/types.go index ba9f91e8851eecb2bb7da35c1e4054f8a0027b3a..b4407fe2736d2cc5aed8bf5871f28f6de20fbf07 100644 --- a/common/script/types.go +++ b/common/script/types.go @@ -14,5 +14,5 @@ type Compiler interface { } type Dispatcher interface { - RunCase(runner *Runner, thecase *data.OrderCaseItem, fdec *data.Decorations, values map[string]interface{}, prompt string) ([]interface{}, error) + RunCase(thecase *data.OrderCaseItem, fdec *data.Decorations, values map[string]interface{}, prompt string) ([]interface{}, error) } diff --git a/common/stream/catalog.go b/common/stream/catalog.go index 83567271eec3479d428b0051cd6075cda9b8e625..5f4845b816fb5c10ea4bff44e859fda7e290a261 100644 --- a/common/stream/catalog.go +++ b/common/stream/catalog.go @@ -19,6 +19,7 @@ import ( type MessageType int // RawMessage On and off wire format. +// TODO putting the type in the extracted message is redundant now, but I don't plan on touching it for now because I don't know what it could break. 
type RawMessage struct { MType MessageType `json:"mtype"` Data interface{} `json:"data"` @@ -37,6 +38,14 @@ const ( MsgTypeRunCurrentCase MsgTypeCancelComplete MsgTypeRunLog + MsgTypeReady + MsgTypeFault + MsgTypeOrderFlow + MsgTypeOrderCase + MsgTypeOrderDirective + MsgTypeResult + MsgTypeDone + MsgTypeClosing ) type msgRegistryEntry struct { @@ -96,6 +105,60 @@ var messageTypeRegistry = []msgRegistryEntry{ return &MsgRunLog{} }, }, + { + MsgTypeReady, // #7 + "Ready", + func() interface{} { + return &MsgReady{} + }, + }, + { + MsgTypeFault, // #8 + "Fault", + func() interface{} { return &MsgFault{} }, + }, + { + MsgTypeOrderFlow, // #9 + "Order Flow", + func() interface{} { + return &MsgOrderFlow{} + }, + }, + { + MsgTypeOrderCase, // #10 + "Order Case", + func() interface{} { + return &MsgOrderCase{} + }, + }, + { + MsgTypeOrderDirective, // #11 + "Order Directive", + func() interface{} { + return &MsgOrderDirective{} + }, + }, + { + MsgTypeResult, // #12 + "Result", + func() interface{} { + return &MsgResult{} + }, + }, + { + MsgTypeDone, // #13 + "Done", + func() interface{} { + return &MsgDone{} + }, + }, + { + MsgTypeClosing, // #14 + "Closing", + func() interface{} { + return &MsgClosing{} + }, + }, } // String returns the string representation of a MessageType diff --git a/common/stream/client_direct.go b/common/stream/client_direct.go index 1f958fe380350a550991de6aba49d15a08988116..1e9e6eb227103558e3848ec290a6a9a00ef024ef 100644 --- a/common/stream/client_direct.go +++ b/common/stream/client_direct.go @@ -42,7 +42,7 @@ func (c *DirectStreamClient) Send(msg Message) error { } // Instead of using String(), use the numeric value - // /" + // req, err := http.NewRequest("POST", c.options.Target+"/message", bytes.NewBuffer(jsonData)) if err != nil { return err diff --git a/common/stream/msg_mag.go b/common/stream/msg_mag.go new file mode 100644 index 0000000000000000000000000000000000000000..573515142cbefedd05d7d3ef4f6d04aa392297c0 --- /dev/null +++ 
b/common/stream/msg_mag.go @@ -0,0 +1,200 @@ +/* +Package stream +Copyright (c) 2025 Ginfra Project +[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt, if unaltered. You are free to alter and apply any license +you wish to this file. + +Messages for runner control. +*/ +package stream + +import ( + "gitlab.com/ginfra/wwwherd/common/data" +) + +type MsgReady struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` +} + +func (m *MsgReady) GetMType() MessageType { + return MsgTypeReady +} +func (m *MsgReady) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. + m.MType = m.GetMType() +} + +// NewMsgReady creates a new MsgReady instance +func NewMsgReady(token string) *MsgReady { + return &MsgReady{ + MType: MsgTypeReady, + Token: token, + } +} + +type MsgFault struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` +} + +func (m *MsgFault) GetMType() MessageType { + return MsgTypeFault +} +func (m *MsgFault) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. + m.MType = m.GetMType() +} + +// NewMsgFault creates a new MsgFault instance +func NewMsgFault(token string) *MsgFault { + return &MsgFault{ + MType: MsgTypeFault, + Token: token, + } +} + +type MsgOrderFlow struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` + Name string `json:"name"` + Prompt string `json:"prompt"` + FlowId int `json:"flow_id"` +} + +func (m *MsgOrderFlow) GetMType() MessageType { + return MsgTypeOrderFlow +} +func (m *MsgOrderFlow) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. 
+ m.MType = m.GetMType() +} + +// NewMsgOrderFlow creates a new MsgOrderFlow instance +func NewMsgOrderFlow(token, name, prompt string, flowId int) *MsgOrderFlow { + return &MsgOrderFlow{ + MType: MsgTypeOrderFlow, + Token: token, + Name: name, + Prompt: prompt, + FlowId: flowId, + } +} + +type MsgOrderCase struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` + Name string `json:"name"` + CaseId int `json:"case_id"` +} + +func (m *MsgOrderCase) GetMType() MessageType { + return MsgTypeOrderCase +} +func (m *MsgOrderCase) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. + m.MType = m.GetMType() +} + +// NewMsgOrderCase creates a new MsgOrderCase instance +func NewMsgOrderCase(token, name string, caseId int) *MsgOrderCase { + return &MsgOrderCase{ + MType: MsgTypeOrderCase, + Token: token, + Name: name, + CaseId: caseId, + } +} + +type MsgOrderDirective struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` + Type int `json:"type"` + Text string `json:"text"` +} + +func (m *MsgOrderDirective) GetMType() MessageType { + return MsgTypeOrderDirective +} +func (m *MsgOrderDirective) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. + m.MType = m.GetMType() +} + +// NewMsgOrderDirective creates a new MsgOrderDirective instance +func NewMsgOrderDirective(token, text string, t int) *MsgOrderDirective { + return &MsgOrderDirective{ + MType: MsgTypeOrderDirective, + Token: token, + Type: t, + Text: text, + } +} + +type MsgResult struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` + Result data.RunStatus `json:"result"` + Info string `json:"info"` + Costs string `json:"costs"` +} + +func (m *MsgResult) GetMType() MessageType { + return MsgTypeResult +} +func (m *MsgResult) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. 
+ m.MType = m.GetMType() +} + +// NewMsgResult creates a new MsgResult instance +func NewMsgResult(token string, result data.RunStatus, info, costs string) *MsgResult { + return &MsgResult{ + MType: MsgTypeResult, + Token: token, + Result: result, + Info: info, + Costs: costs, + } +} + +type MsgDone struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` + Log string `json:"log"` +} + +func (m *MsgDone) GetMType() MessageType { + return MsgTypeDone +} +func (m *MsgDone) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. + m.MType = m.GetMType() +} + +// NewMsgDone creates a new MsgDone instance +func NewMsgDone(token string) *MsgDone { + return &MsgDone{ + MType: MsgTypeDone, + Token: token, + } +} + +type MsgClosing struct { + MType MessageType `json:"mtype"` + Token string `json:"token"` + Msg string `json:"msg"` + Log string `json:"log"` + Bundle string `json:"bundle"` // Bundle URL for future features. +} + +func (m *MsgClosing) GetMType() MessageType { + return MsgTypeClosing +} +func (m *MsgClosing) SetMType() { // But why? To prevent message buffoonery. There is a sneaky exploit without it. + m.MType = m.GetMType() +} + +// NewMsgClosing creates a new MsgClosing instance +func NewMsgClosing(token, msg, log, bundle string) *MsgClosing { + return &MsgClosing{ + MType: MsgTypeClosing, + Token: token, + Msg: msg, + Log: log, + Bundle: bundle, + } +} diff --git a/docs/ARCH.md b/docs/ARCH.md new file mode 100644 index 0000000000000000000000000000000000000000..02066f007fa0c654a7083a68fb18afecbef42fdc --- /dev/null +++ b/docs/ARCH.md @@ -0,0 +1,52 @@ +# WWWHerd. Herding the internet. +Copyright (c) 2025 Ginfra Project. All rights reserved. + +*You can use your wwwherds and we will take care of the code.* + +## COMPONENTS + +### API Server + +The API server is found in service/server/. This is the UI and REST API support for the application. 
It supports +both JWT user authentication and configured token authentication. + +IMPORTANT: The environment variable WWWHERD_JWT_SECRET should be set to a JWT secret key for signing the JWT tokens. +The following is the default value: + +``` +export WWWHERD_JWT_SECRET="7ZjeJaw72bM2akfbGUfevK2Y5I1jFMQdvY86tzmGPCRdfq6oMVdKzCr0QCxUr9t" +``` + +The default is fine for demos, but it is extremely dangerous to use in any sort of production environment. The all-in-one +container available from the WWWHerd project (i.e. wwwherd/all:latest) will automatically generate a new key the first +time the container is started. + +### Frontend + +The frontend is found in service/src/. + +### Service Node + +The service node is found in service/ and is a ginfra-based service. It handles separable scaling components for the +following services: + +- Database management +- Run job dispatch and monitoring +- Periodic data processes + +### Runner service + +The runner service is found in runner/ and is a ginfra-based service. It handles actually running the AI processes. + +### Postgres node + +It is found in postgres/. Intended to build postgres node containers. It is not required and exists as a convenience. +Scaled postgres instances must be separately architected using known practices that are beyond the scope of our +documentation. + +### Configuration node + +It is found in config/. Intended to build ginfra configuration node containers. It is not required and exists as a +convenience for those who want to use Ginfra for management. + + diff --git a/docs/LOCALDEV.md b/docs/LOCALDEV.md index ea04acbc45e12ffde40a2a20fb416443fafc2717..2b280fceb52327f4585b698dbfcecebcf1b5eb93 100644 --- a/docs/LOCALDEV.md +++ b/docs/LOCALDEV.md @@ -48,5 +48,108 @@ npm install ```shell npx playwright install-deps npx playwright install +``` + +### DEPENDENCIES + +Check out the WWWHerd repo. I typically put my repos in a /build directory, but you can put it anywhere. 
(I have a year +of personal tooling that makes this convenient). + +```shell +cd /build +git clone https://gitlab.com/ginfra/wwwherd.git +``` + +You need to have the [Ginfra](https://gitlab.com/ginfra/ginfra) repo cloned inside the WWWHerd repo. + +```shell +cd wwwherd +git clone https://gitlab.com/ginfra/ginfra.git +``` + +You need a running postgres server. + +## FIRST RUN + +I use the JetBrains IDEs, which is how I will describe running it for development below. The following is a typical +way to run it for the first time. + + +- You will need a POSTGRES server running somewhere, and you will need the postgres user connect URL. The URL will look +something like this: +``` +postgres://postgres:YjkyNDE4ZGJjMjQ1@myaddress.com:5432 +``` + +You can install postgres however you want, but you can get a container for it by cd'ing into the postgres/ directory +and running the following command: ```task docker``` If you use this container, there will be a file called /postgres_url +in the container that contains the URL. + +- Copy the file build/service_config.json.NEW to the repo root directory as service_config.json. + +- Copy the file test/config.json to test/working.json. + +- Edit test/working.json and replace 'postgres://postgres:<<>>@<<>>:5432' with the URL + for your postgres server. + +- From the repo root directory, build and start the agent with the following commands: +``` +cd service/ +go build -v -o build/ginfra_service.exe +build/ginfra_service.exe $PWD +``` + +- In another terminal, initialize the database with the following command. +``` +curl -X POST \ + -H "Content-Type: application/json" \ + -H "auth: AAAAAAAAAAAA" \ + -d '{ + "name": "wwwherd" + }' \ + http://localhost:8900/ginfra/datasource/init +``` +You should see a user named 'wwwherd' and a password in the output. 
+ +IMPORTANT NOTE: if you ever switch to a new database (or even just restart the postgres container provided), you MUST +delete the file postgres_user_url in the repo root BEFORE the previous step where you started the server. + + +- (TODO: this step may not be necessary anymore) Edit the file test/working.json and next to "postgres_url" add this entry. +``` +"postgres_user_url": "postgres://wwwherd:<<>>@<<>>:5432/wwwherd" +``` +Where <<>> is the password output from the previous command and <<>> is the host address for +the postgres server. + +- In the original terminal, kill the build/ginfra_service.exe command. + +### FIRST AND SUBSEQUENT RUNS + +You can start these using traditional methods (such as starting the ginfra_service.exe on the command line), but the following +is how most of our development is done. + +- Start vite for the backend UI. In the repo root, run the following command: +```bash +cd service/ +npx vite ``` + +- Start the API server. With the repo root open in [WebStorm](https://www.jetbrains.com/webstorm/), create the following + run configuration and start it. + +![Webstorm](assets/run_webstorm.png) + +- Start the dispatch service. With the repo directory service/ open in [GoLand](https://www.jetbrains.com/go/), create + the following run configuration and start it. + +![Service](assets/run_service.png) + +- Start the runner service. With the repo directory runner/ open in [GoLand](https://www.jetbrains.com/go/), create + the following run configuration and start it. + +![Runner](assets/run_runner.png) + +The UI will be available on http://localhost:8000. You can log in with the username 'admin' and the password 'aaaaaaaa' (eight 'a' characters). + + diff --git a/docs/RESTAPI.md b/docs/RESTAPI.md new file mode 100644 index 0000000000000000000000000000000000000000..9e39ae6888af1c1f3fbe73e3162e4534e0585fb0 --- /dev/null +++ b/docs/RESTAPI.md @@ -0,0 +1,127 @@ +# WWWHerd. Herding the internet. +Copyright (c) 2025 Ginfra Project. 
All rights reserved. + +*You can use your wwwherds and we will take care of the code.* + +## REST API + +All calls require the header 'auth-token' containing the token added in the UI. All examples assume the token is in +the environment variable WWWHERD_REST_TOKEN. + +Response codes: +- 401 - Bad login information +- 403 - Bad auth-token +- 406 - Token has expired. + + +### GET /api/design + +Gets all designs. + +Example: +```bash +curl -X GET \ + -H "Content-Type: application/json" \ + -H "auth-token: ${WWWHERD_REST_TOKEN}" \ + http://localhost:8000/api/design +``` + +### GET /api/design/:designId + +Get a design with the id as :designId. + +Example: +```bash +curl -X GET \ + -H "Content-Type: application/json" \ + -H "auth-token: ${WWWHERD_REST_TOKEN}" \ + http://localhost:8000/api/design/1 +``` + +### POST /api/runs + +The JSON body will have the following fields: +- name +- narrative +- values +- host +- provider_id (int) +- flowids (json int array) +- tag_ids (json int array) + +You can get these values as the output from GET /api/design or GET /api/design/:designId. A successful response will +have the run's id. + +Example. This assumes [the integration test](../integration/demo/wwwherd_test.json) has been imported: +```bash + curl -X POST \ + -H "Content-Type: application/json" \ + -H "auth-token: ${WWWHERD_REST_TOKEN}" \ + -d '{"name":"Run Jacks Pet Store ","narrative":"Verify Jacks Pet Store.","values":null,"host":"http://localhost:8888","flowids":[1,2,3],"provider_id":1,"tag_ids":[2,3] }' \ + http://localhost:8000/api/runs +``` + +### GET /api/runs/:runId + +Get information and status about the run with the id as :runId. + +The status values are as follows: +1= New and not yet seen by the system. +2= Waiting for a node. 
+3= Running +4= Cancelling +5= Canceled +6= Done without result +7= Done and errored +8= Done with a passing result +9= Done and failed by assertions +10= Skipped (unlikely at the run level) +11= Blocked (unlikely at the run level) +12= Disabled (unlikely at the run level) +13= Unknown. This will always be a bug. + +Example: +```bash +curl -X GET \ + -H "Content-Type: application/json" \ + -H "auth-token: ${WWWHERD_REST_TOKEN}" \ + http://localhost:8000/api/runs/1 +``` + +### GET /api/cancel/:runId + +Cancel the run with the id as :runId. + +Example: +```bash +curl -X GET \ + -H "Content-Type: application/json" \ + -H "auth-token: ${WWWHERD_REST_TOKEN}" \ + http://localhost:8000/api/cancel/1 +``` + +### GET /api/run-logs/:runId + +Get the logs for the run with the id as :runId. The logs won't be available until the run is done. The data returned +is Base64 encoded. + +Example: +```bash +curl -X GET \ + -H "Content-Type: application/json" \ + -H "auth-token: ${WWWHERD_REST_TOKEN}" \ + http://localhost:8000/api/run-logs/1 | base64 --decode +``` + +### GET /api/stats/report/:runId + +Get the formatted report for the run with the id as :runId. Particularly useful for test reporting. 
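The numeric status values listed above for GET /api/runs/:runId can be mapped to readable labels in a client script. A minimal bash sketch (the `status_name` helper and its labels are illustrative, not part of the wwwherd API):

```shell
# Illustrative helper: translate the numeric run status returned by
# GET /api/runs/:runId into a readable label. The codes mirror the
# status table above; the function itself is not part of wwwherd.
status_name() {
  case "$1" in
    1)  echo "new" ;;
    2)  echo "waiting-for-node" ;;
    3)  echo "running" ;;
    4)  echo "cancelling" ;;
    5)  echo "canceled" ;;
    6)  echo "done-no-result" ;;
    7)  echo "done-errored" ;;
    8)  echo "passed" ;;
    9)  echo "failed-assertions" ;;
    10) echo "skipped" ;;
    11) echo "blocked" ;;
    12) echo "disabled" ;;
    *)  echo "unknown" ;;
  esac
}

status_name 8    # prints "passed"
```

A polling loop in CI could call GET /api/runs/:runId, extract the status field, and feed it through a mapping like this until it reaches a terminal value (5 through 9).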
+
+Example:
+```bash
+curl -X GET \
+  -H "Content-Type: application/json" \
+  -H "auth-token: ${WWWHERD_REST_TOKEN}" \
+  http://localhost:8000/api/stats/report/1
+```
+
diff --git a/docs/assets/run_runner.png b/docs/assets/run_runner.png
new file mode 100644
index 0000000000000000000000000000000000000000..d6bc8f4b3a2c9180d56ab08f08330fad87fca85d
Binary files /dev/null and b/docs/assets/run_runner.png differ
diff --git a/docs/assets/run_service.png b/docs/assets/run_service.png
new file mode 100644
index 0000000000000000000000000000000000000000..fbb6e9991fe21d3103bc44e1fd7c8452ccad02b8
Binary files /dev/null and b/docs/assets/run_service.png differ
diff --git a/docs/assets/run_webstorm.png b/docs/assets/run_webstorm.png
new file mode 100644
index 0000000000000000000000000000000000000000..9254a10b269f2db6e6c3f67696b48bd47fd2de5b
Binary files /dev/null and b/docs/assets/run_webstorm.png differ
diff --git a/docs/notes/magservice_interface.txt b/docs/notes/magservice_interface.txt
new file mode 100644
index 0000000000000000000000000000000000000000..add0e18918700de2059f307cf4ef4506efd2c191
--- /dev/null
+++ b/docs/notes/magservice_interface.txt
@@ -0,0 +1,67 @@
+--------------------------------------------------------------------
+FEATURES
+
+- Put down the groundwork for separating the dispatcher from the runners.
+- All comms are message-based now, so they can be migrated to a real broker instead of point-to-point (HTTP-based).
+- Keep point-to-point for local dev and test, as well as all-in-one containers.
+- No more stdin and stdout. Let the runner handle logging.
+
+--------------------------------------------------------------------
+Open Questions
+- Lost processes will be told to fault. But how do we find their ghosts in the database?
+
+
+--------------------------------------------------------------------
+A. Normal processing
+
+___ RUNNER ___                ___ DISPATCH SERVICE ___
+
+                              state=Start
+m.Ready          ----->       state=Flow
+                 <-----       m.OrderFlow("name0")
+
+m.Ready          ----->       state=Case
+                 <-----       m.OrderCase("name0", case info)
+
+m.Ready          ----->       state=Directive
+                 <-----       m.OrderDirective(directive)
+
+m.Result(result) ----->       process(result)
+                 <-----       m.OrderDirective(directive)
+
+m.Result(result) ----->       store(result), state=Case
+                 <-----       m.OrderCase("name1", case info)
+
+m.Ready          ----->       state=Directive
+                 <-----       m.OrderDirective(directive)
+
+m.Result(result) ----->       store(result), state=Flow
+                 <-----       m.OrderFlow("name1")
+
+m.Ready          ----->       state=Case
+                 <-----       m.OrderCase("name0", case info)
+
+m.Ready          ----->       state=Directive
+                 <-----       m.OrderDirective(directive)
+
+m.Result(result) ----->       store(result), state=Done
+                 <-----       m.Done
+
+m.Closing(log)   ----->       process(result,log) state=Closed
+exit()           <-----       NOP
+
+--------------------------------------------------------------------
+B. Runner fault
+
+                              state=Start
+m.Ready          ----->       state=Flow
+                 <-----       m.OrderFlow("name0")
+
+m.Ready          ----->       state=Case
+                 <-----       m.OrderCase("name0", case info)
+
+m.Fault(result)  ----->       store(result), state=Fault
+                 <-----       m.Done
+
+m.Closing(log)   ----->       process(result,log) state=Closed
+exit()           <-----       NOP
\ No newline at end of file
diff --git a/docs/notes/rest.txt b/docs/notes/rest.txt
new file mode 100644
index 0000000000000000000000000000000000000000..3be834ad9ddfc851a6dd194b8c936be8c42d026f
--- /dev/null
+++ b/docs/notes/rest.txt
@@ -0,0 +1,12 @@
+curl -X GET \
+  -H "Content-Type: application/json" \
+  -H "auth-token: ${WWWHERD_REST_TOKEN}" \
+  http://localhost:8000/api/design
+
+ curl -X POST \
+  -H "Content-Type: application/json" \
+  -H "auth-token: ${WWWHERD_REST_TOKEN}" \
+  -d '{
+    "name":"Run Jacks Pet Store ","narrative":"Verify Jacks Pet Store.","values":null,"host":"http://localhost:8888","flowids":[1,2,3],"provider_id":1,"tag_ids":[2,3]
+  }' \
+  http://localhost:8000/api/runs
diff --git 
a/docs/public/DEPRECATED b/docs/public/DEPRECATED
new file mode 100644
index 0000000000000000000000000000000000000000..81fae0f7ff61d369c9bcad93aa94f101cca641c3
--- /dev/null
+++ b/docs/public/DEPRECATED
@@ -0,0 +1 @@
+This is the old website and will no longer be used.
diff --git a/docs/public/demo1.gif b/docs/public/demo1.gif
index c2d1954f00263f9a2adc7465e676394044a094f5..32c146b7ba6b05454ded101df274feca03b3dd35 100644
Binary files a/docs/public/demo1.gif and b/docs/public/demo1.gif differ
diff --git a/docs/public/demo2.gif b/docs/public/demo2.gif
index 8ec7ed4bec4575d3d5955a43c43deadf07934454..703325b5a4df56826dd391d46dfb254a4fc3b375 100644
Binary files a/docs/public/demo2.gif and b/docs/public/demo2.gif differ
diff --git a/docs/public/demo3.gif b/docs/public/demo3.gif
index 29dc64078f14ddb1ee5b63501919226da9656c95..8b7cfa83d8eb988a60ba0186629850f5f4381ecd 100644
Binary files a/docs/public/demo3.gif and b/docs/public/demo3.gif differ
diff --git a/docs/public/index.html b/docs/public/index.html
index 93a11a54571b69f38349b86ae3639ad623dd9f9e..cde37da66ec978bba850ab67ed68e55acfb8ccb2 100644
--- a/docs/public/index.html
+++ b/docs/public/index.html
@@ -104,7 +104,7 @@
             font-size: 1.1rem;
             margin-bottom: 1.5rem;
             text-align: center;
-            max-width: 800px;
+            max-width: 900px;
             margin-left: auto;
             margin-right: auto;
         }
@@ -385,7 +385,7 @@

AI Providers

-

Works out of the box with Anthropic's Clause Sonnet 4. Supports many visual based models through OpenRouter, as well +

Works out of the box with Anthropic's Claude Sonnet 4. Supports other vision-based models through OpenRouter, as well as self-hosted models through any OpenAI-compatible provider.

diff --git a/docs/website/README.md b/docs/website/README.md new file mode 100644 index 0000000000000000000000000000000000000000..3e08ef9b8f9d019c310296da3f37c3b38f3607cb --- /dev/null +++ b/docs/website/README.md @@ -0,0 +1,8 @@ +# WWWHerd website + +### build + +``` +npm install +npx vite build +``` diff --git a/docs/website/index.html b/docs/website/index.html new file mode 100644 index 0000000000000000000000000000000000000000..dacb265c23a122e848fb888dd0cdc84edb95d996 --- /dev/null +++ b/docs/website/index.html @@ -0,0 +1,14 @@ + + + + + + + WWWHerd - Herding the Internet + + + +
+ + + diff --git a/docs/website/package-lock.json b/docs/website/package-lock.json new file mode 100644 index 0000000000000000000000000000000000000000..4c63ce0a4caedda8dd051ca232076288237a024a --- /dev/null +++ b/docs/website/package-lock.json @@ -0,0 +1,1516 @@ +{ + "name": "wwwherdweb", + "version": "0.0.1", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "wwwherdweb", + "version": "0.0.1", + "dependencies": { + "vue": "^3.5.18", + "vue-router": "^4.5.1", + "vuetify": "^3.9.0" + }, + "devDependencies": { + "@mdi/font": "7.4.47", + "@types/node": "^24.0.10", + "@vitejs/plugin-vue": "^6.0.1", + "@vue/tsconfig": "^0.7.0", + "typescript": "~5.8.3", + "vite": "^7.1.2", + "vue-tsc": "^3.0.5" + } + }, + "node_modules/@babel/helper-string-parser": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz", + "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-validator-identifier": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.27.1.tgz", + "integrity": "sha512-D2hP9eA+Sqx1kBZgzxZh0y1trbuU+JoDkiEwqhQ36nodYqJwyEIhPSdMNd7lOm/4io72luTPWH20Yda0xOuUow==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/parser": { + "version": "7.28.4", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.28.4.tgz", + "integrity": "sha512-yZbBqeM6TkpP9du/I2pUZnJsRMGGvOuIrhjzC1AwHwW+6he4mni6Bp/m8ijn0iOuZuPI2BfkCoSRunpyjnrQKg==", + "license": "MIT", + "dependencies": { + "@babel/types": "^7.28.4" + }, + "bin": { + "parser": "bin/babel-parser.js" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@babel/types": { + "version": "7.28.4", + "resolved": 
"https://registry.npmjs.org/@babel/types/-/types-7.28.4.tgz", + "integrity": "sha512-bkFqkLhh3pMBUQQkpVgWDWq/lqzc2678eUyDlTBhRqhCHFguYYGM0Efga7tYk4TogG/3x0EEl66/OQ+WGbWB/Q==", + "license": "MIT", + "dependencies": { + "@babel/helper-string-parser": "^7.27.1", + "@babel/helper-validator-identifier": "^7.27.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@esbuild/aix-ppc64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.25.9.tgz", + "integrity": "sha512-OaGtL73Jck6pBKjNIe24BnFE6agGl+6KxDtTfHhy1HmhthfKouEcOhqpSL64K4/0WCtbKFLOdzD/44cJ4k9opA==", + "cpu": [ + "ppc64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "aix" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/android-arm": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.25.9.tgz", + "integrity": "sha512-5WNI1DaMtxQ7t7B6xa572XMXpHAaI/9Hnhk8lcxF4zVN4xstUgTlvuGDorBguKEnZO70qwEcLpfifMLoxiPqHQ==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/android-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.25.9.tgz", + "integrity": "sha512-IDrddSmpSv51ftWslJMvl3Q2ZT98fUSL2/rlUXuVqRXHCs5EUF1/f+jbjF5+NG9UffUDMCiTyh8iec7u8RlTLg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/android-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.25.9.tgz", + "integrity": "sha512-I853iMZ1hWZdNllhVZKm34f4wErd4lMyeV7BLzEExGEIZYsOzqDWDf+y082izYUE8gtJnYHdeDpN/6tUdwvfiw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ], + "engines": { + 
"node": ">=18" + } + }, + "node_modules/@esbuild/darwin-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.25.9.tgz", + "integrity": "sha512-XIpIDMAjOELi/9PB30vEbVMs3GV1v2zkkPnuyRRURbhqjyzIINwj+nbQATh4H9GxUgH1kFsEyQMxwiLFKUS6Rg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/darwin-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.25.9.tgz", + "integrity": "sha512-jhHfBzjYTA1IQu8VyrjCX4ApJDnH+ez+IYVEoJHeqJm9VhG9Dh2BYaJritkYK3vMaXrf7Ogr/0MQ8/MeIefsPQ==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/freebsd-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.25.9.tgz", + "integrity": "sha512-z93DmbnY6fX9+KdD4Ue/H6sYs+bhFQJNCPZsi4XWJoYblUqT06MQUdBCpcSfuiN72AbqeBFu5LVQTjfXDE2A6Q==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/freebsd-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.25.9.tgz", + "integrity": "sha512-mrKX6H/vOyo5v71YfXWJxLVxgy1kyt1MQaD8wZJgJfG4gq4DpQGpgTB74e5yBeQdyMTbgxp0YtNj7NuHN0PoZg==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-arm": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.25.9.tgz", + "integrity": "sha512-HBU2Xv78SMgaydBmdor38lg8YDnFKSARg1Q6AT0/y2ezUAKiZvc211RDFHlEZRFNRVhcMamiToo7bDx3VEOYQw==", + "cpu": [ + "arm" + ], + "dev": true, + 
"license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.25.9.tgz", + "integrity": "sha512-BlB7bIcLT3G26urh5Dmse7fiLmLXnRlopw4s8DalgZ8ef79Jj4aUcYbk90g8iCa2467HX8SAIidbL7gsqXHdRw==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-ia32": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.25.9.tgz", + "integrity": "sha512-e7S3MOJPZGp2QW6AK6+Ly81rC7oOSerQ+P8L0ta4FhVi+/j/v2yZzx5CqqDaWjtPFfYz21Vi1S0auHrap3Ma3A==", + "cpu": [ + "ia32" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-loong64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.25.9.tgz", + "integrity": "sha512-Sbe10Bnn0oUAB2AalYztvGcK+o6YFFA/9829PhOCUS9vkJElXGdphz0A3DbMdP8gmKkqPmPcMJmJOrI3VYB1JQ==", + "cpu": [ + "loong64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-mips64el": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.25.9.tgz", + "integrity": "sha512-YcM5br0mVyZw2jcQeLIkhWtKPeVfAerES5PvOzaDxVtIyZ2NUBZKNLjC5z3/fUlDgT6w89VsxP2qzNipOaaDyA==", + "cpu": [ + "mips64el" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-ppc64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.25.9.tgz", + "integrity": 
"sha512-++0HQvasdo20JytyDpFvQtNrEsAgNG2CY1CLMwGXfFTKGBGQT3bOeLSYE2l1fYdvML5KUuwn9Z8L1EWe2tzs1w==", + "cpu": [ + "ppc64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-riscv64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.25.9.tgz", + "integrity": "sha512-uNIBa279Y3fkjV+2cUjx36xkx7eSjb8IvnL01eXUKXez/CBHNRw5ekCGMPM0BcmqBxBcdgUWuUXmVWwm4CH9kg==", + "cpu": [ + "riscv64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-s390x": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.25.9.tgz", + "integrity": "sha512-Mfiphvp3MjC/lctb+7D287Xw1DGzqJPb/J2aHHcHxflUo+8tmN/6d4k6I2yFR7BVo5/g7x2Monq4+Yew0EHRIA==", + "cpu": [ + "s390x" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.25.9.tgz", + "integrity": "sha512-iSwByxzRe48YVkmpbgoxVzn76BXjlYFXC7NvLYq+b+kDjyyk30J0JY47DIn8z1MO3K0oSl9fZoRmZPQI4Hklzg==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/netbsd-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/netbsd-arm64/-/netbsd-arm64-0.25.9.tgz", + "integrity": "sha512-9jNJl6FqaUG+COdQMjSCGW4QiMHH88xWbvZ+kRVblZsWrkXlABuGdFJ1E9L7HK+T0Yqd4akKNa/lO0+jDxQD4Q==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "netbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/netbsd-x64": { + "version": "0.25.9", + "resolved": 
"https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.25.9.tgz", + "integrity": "sha512-RLLdkflmqRG8KanPGOU7Rpg829ZHu8nFy5Pqdi9U01VYtG9Y0zOG6Vr2z4/S+/3zIyOxiK6cCeYNWOFR9QP87g==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "netbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/openbsd-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/openbsd-arm64/-/openbsd-arm64-0.25.9.tgz", + "integrity": "sha512-YaFBlPGeDasft5IIM+CQAhJAqS3St3nJzDEgsgFixcfZeyGPCd6eJBWzke5piZuZ7CtL656eOSYKk4Ls2C0FRQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "openbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/openbsd-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.25.9.tgz", + "integrity": "sha512-1MkgTCuvMGWuqVtAvkpkXFmtL8XhWy+j4jaSO2wxfJtilVCi0ZE37b8uOdMItIHz4I6z1bWWtEX4CJwcKYLcuA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "openbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/openharmony-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/openharmony-arm64/-/openharmony-arm64-0.25.9.tgz", + "integrity": "sha512-4Xd0xNiMVXKh6Fa7HEJQbrpP3m3DDn43jKxMjxLLRjWnRsfxjORYJlXPO4JNcXtOyfajXorRKY9NkOpTHptErg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "openharmony" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/sunos-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.25.9.tgz", + "integrity": "sha512-WjH4s6hzo00nNezhp3wFIAfmGZ8U7KtrJNlFMRKxiI9mxEK1scOMAaa9i4crUtu+tBr+0IN6JCuAcSBJZfnphw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "sunos" + ], + "engines": { + 
"node": ">=18" + } + }, + "node_modules/@esbuild/win32-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.25.9.tgz", + "integrity": "sha512-mGFrVJHmZiRqmP8xFOc6b84/7xa5y5YvR1x8djzXpJBSv/UsNK6aqec+6JDjConTgvvQefdGhFDAs2DLAds6gQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/win32-ia32": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.25.9.tgz", + "integrity": "sha512-b33gLVU2k11nVx1OhX3C8QQP6UHQK4ZtN56oFWvVXvz2VkDoe6fbG8TOgHFxEvqeqohmRnIHe5A1+HADk4OQww==", + "cpu": [ + "ia32" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/win32-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.25.9.tgz", + "integrity": "sha512-PPOl1mi6lpLNQxnGoyAfschAodRFYXJ+9fs6WHXz7CSWKbOqiMZsubC+BQsVKuul+3vKLuwTHsS2c2y9EoKwxQ==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@jridgewell/sourcemap-codec": { + "version": "1.5.5", + "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz", + "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==", + "license": "MIT" + }, + "node_modules/@mdi/font": { + "version": "7.4.47", + "resolved": "https://registry.npmjs.org/@mdi/font/-/font-7.4.47.tgz", + "integrity": "sha512-43MtGpd585SNzHZPcYowu/84Vz2a2g31TvPMTm9uTiCSWzaheQySUcSyUH/46fPnuPQWof2yd0pGBtzee/IQWw==", + "dev": true, + "license": "Apache-2.0" + }, + "node_modules/@rolldown/pluginutils": { + "version": "1.0.0-beta.29", + "resolved": 
"https://registry.npmjs.org/@rolldown/pluginutils/-/pluginutils-1.0.0-beta.29.tgz", + "integrity": "sha512-NIJgOsMjbxAXvoGq/X0gD7VPMQ8j9g0BiDaNjVNVjvl+iKXxL3Jre0v31RmBYeLEmkbj2s02v8vFTbUXi5XS2Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/@rollup/rollup-android-arm-eabi": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.50.2.tgz", + "integrity": "sha512-uLN8NAiFVIRKX9ZQha8wy6UUs06UNSZ32xj6giK/rmMXAgKahwExvK6SsmgU5/brh4w/nSgj8e0k3c1HBQpa0A==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ] + }, + "node_modules/@rollup/rollup-android-arm64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.50.2.tgz", + "integrity": "sha512-oEouqQk2/zxxj22PNcGSskya+3kV0ZKH+nQxuCCOGJ4oTXBdNTbv+f/E3c74cNLeMO1S5wVWacSws10TTSB77g==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "android" + ] + }, + "node_modules/@rollup/rollup-darwin-arm64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.50.2.tgz", + "integrity": "sha512-OZuTVTpj3CDSIxmPgGH8en/XtirV5nfljHZ3wrNwvgkT5DQLhIKAeuFSiwtbMto6oVexV0k1F1zqURPKf5rI1Q==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@rollup/rollup-darwin-x64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.50.2.tgz", + "integrity": "sha512-Wa/Wn8RFkIkr1vy1k1PB//VYhLnlnn5eaJkfTQKivirOvzu5uVd2It01ukeQstMursuz7S1bU+8WW+1UPXpa8A==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@rollup/rollup-freebsd-arm64": { + "version": "4.50.2", + "resolved": 
"https://registry.npmjs.org/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.50.2.tgz", + "integrity": "sha512-QkzxvH3kYN9J1w7D1A+yIMdI1pPekD+pWx7G5rXgnIlQ1TVYVC6hLl7SOV9pi5q9uIDF9AuIGkuzcbF7+fAhow==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ] + }, + "node_modules/@rollup/rollup-freebsd-x64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.50.2.tgz", + "integrity": "sha512-dkYXB0c2XAS3a3jmyDkX4Jk0m7gWLFzq1C3qUnJJ38AyxIF5G/dyS4N9B30nvFseCfgtCEdbYFhk0ChoCGxPog==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ] + }, + "node_modules/@rollup/rollup-linux-arm-gnueabihf": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.50.2.tgz", + "integrity": "sha512-9VlPY/BN3AgbukfVHAB8zNFWB/lKEuvzRo1NKev0Po8sYFKx0i+AQlCYftgEjcL43F2h9Ui1ZSdVBc4En/sP2w==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-arm-musleabihf": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.50.2.tgz", + "integrity": "sha512-+GdKWOvsifaYNlIVf07QYan1J5F141+vGm5/Y8b9uCZnG/nxoGqgCmR24mv0koIWWuqvFYnbURRqw1lv7IBINw==", + "cpu": [ + "arm" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-arm64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.50.2.tgz", + "integrity": "sha512-df0Eou14ojtUdLQdPFnymEQteENwSJAdLf5KCDrmZNsy1c3YaCNaJvYsEUHnrg+/DLBH612/R0xd3dD03uz2dg==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + 
"node_modules/@rollup/rollup-linux-arm64-musl": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.50.2.tgz", + "integrity": "sha512-iPeouV0UIDtz8j1YFR4OJ/zf7evjauqv7jQ/EFs0ClIyL+by++hiaDAfFipjOgyz6y6xbDvJuiU4HwpVMpRFDQ==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-loong64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.50.2.tgz", + "integrity": "sha512-OL6KaNvBopLlj5fTa5D5bau4W82f+1TyTZRr2BdnfsrnQnmdxh4okMxR2DcDkJuh4KeoQZVuvHvzuD/lyLn2Kw==", + "cpu": [ + "loong64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-ppc64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.50.2.tgz", + "integrity": "sha512-I21VJl1w6z/K5OTRl6aS9DDsqezEZ/yKpbqlvfHbW0CEF5IL8ATBMuUx6/mp683rKTK8thjs/0BaNrZLXetLag==", + "cpu": [ + "ppc64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-riscv64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.50.2.tgz", + "integrity": "sha512-Hq6aQJT/qFFHrYMjS20nV+9SKrXL2lvFBENZoKfoTH2kKDOJqff5OSJr4x72ZaG/uUn+XmBnGhfr4lwMRrmqCQ==", + "cpu": [ + "riscv64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-riscv64-musl": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.50.2.tgz", + "integrity": "sha512-82rBSEXRv5qtKyr0xZ/YMF531oj2AIpLZkeNYxmKNN6I2sVE9PGegN99tYDLK2fYHJITL1P2Lgb4ZXnv0PjQvw==", + "cpu": [ + "riscv64" + ], + "dev": 
true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-s390x-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.50.2.tgz", + "integrity": "sha512-4Q3S3Hy7pC6uaRo9gtXUTJ+EKo9AKs3BXKc2jYypEcMQ49gDPFU2P1ariX9SEtBzE5egIX6fSUmbmGazwBVF9w==", + "cpu": [ + "s390x" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-x64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.50.2.tgz", + "integrity": "sha512-9Jie/At6qk70dNIcopcL4p+1UirusEtznpNtcq/u/C5cC4HBX7qSGsYIcG6bdxj15EYWhHiu02YvmdPzylIZlA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-x64-musl": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.50.2.tgz", + "integrity": "sha512-HPNJwxPL3EmhzeAnsWQCM3DcoqOz3/IC6de9rWfGR8ZCuEHETi9km66bH/wG3YH0V3nyzyFEGUZeL5PKyy4xvw==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-openharmony-arm64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.50.2.tgz", + "integrity": "sha512-nMKvq6FRHSzYfKLHZ+cChowlEkR2lj/V0jYj9JnGUVPL2/mIeFGmVM2mLaFeNa5Jev7W7TovXqXIG2d39y1KYA==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "openharmony" + ] + }, + "node_modules/@rollup/rollup-win32-arm64-msvc": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.50.2.tgz", + "integrity": 
"sha512-eFUvvnTYEKeTyHEijQKz81bLrUQOXKZqECeiWH6tb8eXXbZk+CXSG2aFrig2BQ/pjiVRj36zysjgILkqarS2YA==", + "cpu": [ + "arm64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@rollup/rollup-win32-ia32-msvc": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.50.2.tgz", + "integrity": "sha512-cBaWmXqyfRhH8zmUxK3d3sAhEWLrtMjWBRwdMMHJIXSjvjLKvv49adxiEz+FJ8AP90apSDDBx2Tyd/WylV6ikA==", + "cpu": [ + "ia32" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@rollup/rollup-win32-x64-msvc": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.50.2.tgz", + "integrity": "sha512-APwKy6YUhvZaEoHyM+9xqmTpviEI+9eL7LoCH+aLcvWYHJ663qG5zx7WzWZY+a9qkg5JtzcMyJ9z0WtQBMDmgA==", + "cpu": [ + "x64" + ], + "dev": true, + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@types/estree": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "dev": true, + "license": "MIT" + }, + "node_modules/@types/node": { + "version": "24.4.0", + "resolved": "https://registry.npmjs.org/@types/node/-/node-24.4.0.tgz", + "integrity": "sha512-gUuVEAK4/u6F9wRLznPUU4WGUacSEBDPoC2TrBkw3GAnOLHBL45QdfHOXp1kJ4ypBGLxTOB+t7NJLpKoC3gznQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "undici-types": "~7.11.0" + } + }, + "node_modules/@vitejs/plugin-vue": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/@vitejs/plugin-vue/-/plugin-vue-6.0.1.tgz", + "integrity": "sha512-+MaE752hU0wfPFJEUAIxqw18+20euHHdxVtMvbFcOEpjEyfqXH/5DCoTHiVJ0J29EhTJdoTkjEv5YBKU9dnoTw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@rolldown/pluginutils": 
"1.0.0-beta.29" + }, + "engines": { + "node": "^20.19.0 || >=22.12.0" + }, + "peerDependencies": { + "vite": "^5.0.0 || ^6.0.0 || ^7.0.0", + "vue": "^3.2.25" + } + }, + "node_modules/@volar/language-core": { + "version": "2.4.23", + "resolved": "https://registry.npmjs.org/@volar/language-core/-/language-core-2.4.23.tgz", + "integrity": "sha512-hEEd5ET/oSmBC6pi1j6NaNYRWoAiDhINbT8rmwtINugR39loROSlufGdYMF9TaKGfz+ViGs1Idi3mAhnuPcoGQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@volar/source-map": "2.4.23" + } + }, + "node_modules/@volar/source-map": { + "version": "2.4.23", + "resolved": "https://registry.npmjs.org/@volar/source-map/-/source-map-2.4.23.tgz", + "integrity": "sha512-Z1Uc8IB57Lm6k7q6KIDu/p+JWtf3xsXJqAX/5r18hYOTpJyBn0KXUR8oTJ4WFYOcDzWC9n3IflGgHowx6U6z9Q==", + "dev": true, + "license": "MIT" + }, + "node_modules/@volar/typescript": { + "version": "2.4.23", + "resolved": "https://registry.npmjs.org/@volar/typescript/-/typescript-2.4.23.tgz", + "integrity": "sha512-lAB5zJghWxVPqfcStmAP1ZqQacMpe90UrP5RJ3arDyrhy4aCUQqmxPPLB2PWDKugvylmO41ljK7vZ+t6INMTag==", + "dev": true, + "license": "MIT", + "dependencies": { + "@volar/language-core": "2.4.23", + "path-browserify": "^1.0.1", + "vscode-uri": "^3.0.8" + } + }, + "node_modules/@vue/compiler-core": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/compiler-core/-/compiler-core-3.5.21.tgz", + "integrity": "sha512-8i+LZ0vf6ZgII5Z9XmUvrCyEzocvWT+TeR2VBUVlzIH6Tyv57E20mPZ1bCS+tbejgUgmjrEh7q/0F0bibskAmw==", + "license": "MIT", + "dependencies": { + "@babel/parser": "^7.28.3", + "@vue/shared": "3.5.21", + "entities": "^4.5.0", + "estree-walker": "^2.0.2", + "source-map-js": "^1.2.1" + } + }, + "node_modules/@vue/compiler-dom": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/compiler-dom/-/compiler-dom-3.5.21.tgz", + "integrity": "sha512-jNtbu/u97wiyEBJlJ9kmdw7tAr5Vy0Aj5CgQmo+6pxWNQhXZDPsRr1UWPN4v3Zf82s2H3kF51IbzZ4jMWAgPlQ==", + "license": "MIT", + 
"dependencies": { + "@vue/compiler-core": "3.5.21", + "@vue/shared": "3.5.21" + } + }, + "node_modules/@vue/compiler-sfc": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/compiler-sfc/-/compiler-sfc-3.5.21.tgz", + "integrity": "sha512-SXlyk6I5eUGBd2v8Ie7tF6ADHE9kCR6mBEuPyH1nUZ0h6Xx6nZI29i12sJKQmzbDyr2tUHMhhTt51Z6blbkTTQ==", + "license": "MIT", + "dependencies": { + "@babel/parser": "^7.28.3", + "@vue/compiler-core": "3.5.21", + "@vue/compiler-dom": "3.5.21", + "@vue/compiler-ssr": "3.5.21", + "@vue/shared": "3.5.21", + "estree-walker": "^2.0.2", + "magic-string": "^0.30.18", + "postcss": "^8.5.6", + "source-map-js": "^1.2.1" + } + }, + "node_modules/@vue/compiler-ssr": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/compiler-ssr/-/compiler-ssr-3.5.21.tgz", + "integrity": "sha512-vKQ5olH5edFZdf5ZrlEgSO1j1DMA4u23TVK5XR1uMhvwnYvVdDF0nHXJUblL/GvzlShQbjhZZ2uvYmDlAbgo9w==", + "license": "MIT", + "dependencies": { + "@vue/compiler-dom": "3.5.21", + "@vue/shared": "3.5.21" + } + }, + "node_modules/@vue/compiler-vue2": { + "version": "2.7.16", + "resolved": "https://registry.npmjs.org/@vue/compiler-vue2/-/compiler-vue2-2.7.16.tgz", + "integrity": "sha512-qYC3Psj9S/mfu9uVi5WvNZIzq+xnXMhOwbTFKKDD7b1lhpnn71jXSFdTQ+WsIEk0ONCd7VV2IMm7ONl6tbQ86A==", + "dev": true, + "license": "MIT", + "dependencies": { + "de-indent": "^1.0.2", + "he": "^1.2.0" + } + }, + "node_modules/@vue/devtools-api": { + "version": "6.6.4", + "resolved": "https://registry.npmjs.org/@vue/devtools-api/-/devtools-api-6.6.4.tgz", + "integrity": "sha512-sGhTPMuXqZ1rVOk32RylztWkfXTRhuS7vgAKv0zjqk8gbsHkJ7xfFf+jbySxt7tWObEJwyKaHMikV/WGDiQm8g==", + "license": "MIT" + }, + "node_modules/@vue/language-core": { + "version": "3.0.7", + "resolved": "https://registry.npmjs.org/@vue/language-core/-/language-core-3.0.7.tgz", + "integrity": "sha512-0sqqyqJ0Gn33JH3TdIsZLCZZ8Gr4kwlg8iYOnOrDDkJKSjFurlQY/bEFQx5zs7SX2C/bjMkmPYq/NiyY1fTOkw==", + "dev": true, + "license": "MIT", + 
"dependencies": { + "@volar/language-core": "2.4.23", + "@vue/compiler-dom": "^3.5.0", + "@vue/compiler-vue2": "^2.7.16", + "@vue/shared": "^3.5.0", + "alien-signals": "^2.0.5", + "muggle-string": "^0.4.1", + "path-browserify": "^1.0.1", + "picomatch": "^4.0.2" + }, + "peerDependencies": { + "typescript": "*" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + } + } + }, + "node_modules/@vue/reactivity": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/reactivity/-/reactivity-3.5.21.tgz", + "integrity": "sha512-3ah7sa+Cwr9iiYEERt9JfZKPw4A2UlbY8RbbnH2mGCE8NwHkhmlZt2VsH0oDA3P08X3jJd29ohBDtX+TbD9AsA==", + "license": "MIT", + "dependencies": { + "@vue/shared": "3.5.21" + } + }, + "node_modules/@vue/runtime-core": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/runtime-core/-/runtime-core-3.5.21.tgz", + "integrity": "sha512-+DplQlRS4MXfIf9gfD1BOJpk5RSyGgGXD/R+cumhe8jdjUcq/qlxDawQlSI8hCKupBlvM+3eS1se5xW+SuNAwA==", + "license": "MIT", + "dependencies": { + "@vue/reactivity": "3.5.21", + "@vue/shared": "3.5.21" + } + }, + "node_modules/@vue/runtime-dom": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/runtime-dom/-/runtime-dom-3.5.21.tgz", + "integrity": "sha512-3M2DZsOFwM5qI15wrMmNF5RJe1+ARijt2HM3TbzBbPSuBHOQpoidE+Pa+XEaVN+czbHf81ETRoG1ltztP2em8w==", + "license": "MIT", + "dependencies": { + "@vue/reactivity": "3.5.21", + "@vue/runtime-core": "3.5.21", + "@vue/shared": "3.5.21", + "csstype": "^3.1.3" + } + }, + "node_modules/@vue/server-renderer": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/@vue/server-renderer/-/server-renderer-3.5.21.tgz", + "integrity": "sha512-qr8AqgD3DJPJcGvLcJKQo2tAc8OnXRcfxhOJCPF+fcfn5bBGz7VCcO7t+qETOPxpWK1mgysXvVT/j+xWaHeMWA==", + "license": "MIT", + "dependencies": { + "@vue/compiler-ssr": "3.5.21", + "@vue/shared": "3.5.21" + }, + "peerDependencies": { + "vue": "3.5.21" + } + }, + "node_modules/@vue/shared": { + "version": "3.5.21", 
+ "resolved": "https://registry.npmjs.org/@vue/shared/-/shared-3.5.21.tgz", + "integrity": "sha512-+2k1EQpnYuVuu3N7atWyG3/xoFWIVJZq4Mz8XNOdScFI0etES75fbny/oU4lKWk/577P1zmg0ioYvpGEDZ3DLw==", + "license": "MIT" + }, + "node_modules/@vue/tsconfig": { + "version": "0.7.0", + "resolved": "https://registry.npmjs.org/@vue/tsconfig/-/tsconfig-0.7.0.tgz", + "integrity": "sha512-ku2uNz5MaZ9IerPPUyOHzyjhXoX2kVJaVf7hL315DC17vS6IiZRmmCPfggNbU16QTvM80+uYYy3eYJB59WCtvg==", + "dev": true, + "license": "MIT", + "peerDependencies": { + "typescript": "5.x", + "vue": "^3.4.0" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + }, + "vue": { + "optional": true + } + } + }, + "node_modules/alien-signals": { + "version": "2.0.7", + "resolved": "https://registry.npmjs.org/alien-signals/-/alien-signals-2.0.7.tgz", + "integrity": "sha512-wE7y3jmYeb0+h6mr5BOovuqhFv22O/MV9j5p0ndJsa7z1zJNPGQ4ph5pQk/kTTCWRC3xsA4SmtwmkzQO+7NCNg==", + "dev": true, + "license": "MIT" + }, + "node_modules/csstype": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.1.3.tgz", + "integrity": "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw==", + "license": "MIT" + }, + "node_modules/de-indent": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/de-indent/-/de-indent-1.0.2.tgz", + "integrity": "sha512-e/1zu3xH5MQryN2zdVaF0OrdNLUbvWxzMbi+iNA6Bky7l1RoP8a2fIbRocyHclXt/arDrrR6lL3TqFD9pMQTsg==", + "dev": true, + "license": "MIT" + }, + "node_modules/entities": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz", + "integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==", + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/esbuild": { + "version": "0.25.9", + "resolved": 
"https://registry.npmjs.org/esbuild/-/esbuild-0.25.9.tgz", + "integrity": "sha512-CRbODhYyQx3qp7ZEwzxOk4JBqmD/seJrzPa/cGjY1VtIn5E09Oi9/dB4JwctnfZ8Q8iT7rioVv5k/FNT/uf54g==", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "bin": { + "esbuild": "bin/esbuild" + }, + "engines": { + "node": ">=18" + }, + "optionalDependencies": { + "@esbuild/aix-ppc64": "0.25.9", + "@esbuild/android-arm": "0.25.9", + "@esbuild/android-arm64": "0.25.9", + "@esbuild/android-x64": "0.25.9", + "@esbuild/darwin-arm64": "0.25.9", + "@esbuild/darwin-x64": "0.25.9", + "@esbuild/freebsd-arm64": "0.25.9", + "@esbuild/freebsd-x64": "0.25.9", + "@esbuild/linux-arm": "0.25.9", + "@esbuild/linux-arm64": "0.25.9", + "@esbuild/linux-ia32": "0.25.9", + "@esbuild/linux-loong64": "0.25.9", + "@esbuild/linux-mips64el": "0.25.9", + "@esbuild/linux-ppc64": "0.25.9", + "@esbuild/linux-riscv64": "0.25.9", + "@esbuild/linux-s390x": "0.25.9", + "@esbuild/linux-x64": "0.25.9", + "@esbuild/netbsd-arm64": "0.25.9", + "@esbuild/netbsd-x64": "0.25.9", + "@esbuild/openbsd-arm64": "0.25.9", + "@esbuild/openbsd-x64": "0.25.9", + "@esbuild/openharmony-arm64": "0.25.9", + "@esbuild/sunos-x64": "0.25.9", + "@esbuild/win32-arm64": "0.25.9", + "@esbuild/win32-ia32": "0.25.9", + "@esbuild/win32-x64": "0.25.9" + } + }, + "node_modules/estree-walker": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-2.0.2.tgz", + "integrity": "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==", + "license": "MIT" + }, + "node_modules/fdir": { + "version": "6.5.0", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz", + "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12.0.0" + }, + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } 
+ } + }, + "node_modules/fsevents": { + "version": "2.3.3", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz", + "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==", + "dev": true, + "hasInstallScript": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": "^8.16.0 || ^10.6.0 || >=11.0.0" + } + }, + "node_modules/he": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/he/-/he-1.2.0.tgz", + "integrity": "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==", + "dev": true, + "license": "MIT", + "bin": { + "he": "bin/he" + } + }, + "node_modules/magic-string": { + "version": "0.30.19", + "resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.19.tgz", + "integrity": "sha512-2N21sPY9Ws53PZvsEpVtNuSW+ScYbQdp4b9qUaL+9QkHUrGFKo56Lg9Emg5s9V/qrtNBmiR01sYhUOwu3H+VOw==", + "license": "MIT", + "dependencies": { + "@jridgewell/sourcemap-codec": "^1.5.5" + } + }, + "node_modules/muggle-string": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/muggle-string/-/muggle-string-0.4.1.tgz", + "integrity": "sha512-VNTrAak/KhO2i8dqqnqnAHOa3cYBwXEZe9h+D5h/1ZqFSTEFHdM65lR7RoIqq3tBBYavsOXV84NoHXZ0AkPyqQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/nanoid": { + "version": "3.3.11", + "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.11.tgz", + "integrity": "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "bin": { + "nanoid": "bin/nanoid.cjs" + }, + "engines": { + "node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1" + } + }, + "node_modules/path-browserify": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/path-browserify/-/path-browserify-1.0.1.tgz", + "integrity": 
"sha512-b7uo2UCUOYZcnF/3ID0lulOJi/bafxa1xPe7ZPsammBSpjSWQkjNxlt635YGS2MiR9GjvuXCtz2emr3jbsz98g==", + "dev": true, + "license": "MIT" + }, + "node_modules/picocolors": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz", + "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==", + "license": "ISC" + }, + "node_modules/picomatch": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz", + "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", + "dev": true, + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/postcss": { + "version": "8.5.6", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz", + "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==", + "funding": [ + { + "type": "opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/postcss" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "nanoid": "^3.3.11", + "picocolors": "^1.1.1", + "source-map-js": "^1.2.1" + }, + "engines": { + "node": "^10 || ^12 || >=14" + } + }, + "node_modules/rollup": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.50.2.tgz", + "integrity": "sha512-BgLRGy7tNS9H66aIMASq1qSYbAAJV6Z6WR4QYTvj5FgF15rZ/ympT1uixHXwzbZUBDbkvqUI1KR0fH1FhMaQ9w==", + "dev": true, + "license": "MIT", + "dependencies": { + "@types/estree": "1.0.8" + }, + "bin": { + "rollup": "dist/bin/rollup" + }, + "engines": { + "node": ">=18.0.0", + "npm": ">=8.0.0" + }, + "optionalDependencies": { + "@rollup/rollup-android-arm-eabi": "4.50.2", + 
"@rollup/rollup-android-arm64": "4.50.2", + "@rollup/rollup-darwin-arm64": "4.50.2", + "@rollup/rollup-darwin-x64": "4.50.2", + "@rollup/rollup-freebsd-arm64": "4.50.2", + "@rollup/rollup-freebsd-x64": "4.50.2", + "@rollup/rollup-linux-arm-gnueabihf": "4.50.2", + "@rollup/rollup-linux-arm-musleabihf": "4.50.2", + "@rollup/rollup-linux-arm64-gnu": "4.50.2", + "@rollup/rollup-linux-arm64-musl": "4.50.2", + "@rollup/rollup-linux-loong64-gnu": "4.50.2", + "@rollup/rollup-linux-ppc64-gnu": "4.50.2", + "@rollup/rollup-linux-riscv64-gnu": "4.50.2", + "@rollup/rollup-linux-riscv64-musl": "4.50.2", + "@rollup/rollup-linux-s390x-gnu": "4.50.2", + "@rollup/rollup-linux-x64-gnu": "4.50.2", + "@rollup/rollup-linux-x64-musl": "4.50.2", + "@rollup/rollup-openharmony-arm64": "4.50.2", + "@rollup/rollup-win32-arm64-msvc": "4.50.2", + "@rollup/rollup-win32-ia32-msvc": "4.50.2", + "@rollup/rollup-win32-x64-msvc": "4.50.2", + "fsevents": "~2.3.2" + } + }, + "node_modules/source-map-js": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz", + "integrity": "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==", + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/tinyglobby": { + "version": "0.2.15", + "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz", + "integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "fdir": "^6.5.0", + "picomatch": "^4.0.3" + }, + "engines": { + "node": ">=12.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/SuperchupuDev" + } + }, + "node_modules/typescript": { + "version": "5.8.3", + "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.8.3.tgz", + "integrity": "sha512-p1diW6TqL9L07nNxvRMM7hMMw4c5XOo/1ibL4aAIGmSAt9slTE1Xgw5KWuof2uTOvCg9BY7ZRi+GaF+7sfgPeQ==", + 
"devOptional": true, + "license": "Apache-2.0", + "bin": { + "tsc": "bin/tsc", + "tsserver": "bin/tsserver" + }, + "engines": { + "node": ">=14.17" + } + }, + "node_modules/undici-types": { + "version": "7.11.0", + "resolved": "https://registry.npmjs.org/undici-types/-/undici-types-7.11.0.tgz", + "integrity": "sha512-kt1ZriHTi7MU+Z/r9DOdAI3ONdaR3M3csEaRc6ewa4f4dTvX4cQCbJ4NkEn0ohE4hHtq85+PhPSTY+pO/1PwgA==", + "dev": true, + "license": "MIT" + }, + "node_modules/vite": { + "version": "7.1.5", + "resolved": "https://registry.npmjs.org/vite/-/vite-7.1.5.tgz", + "integrity": "sha512-4cKBO9wR75r0BeIWWWId9XK9Lj6La5X846Zw9dFfzMRw38IlTk2iCcUt6hsyiDRcPidc55ZParFYDXi0nXOeLQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "esbuild": "^0.25.0", + "fdir": "^6.5.0", + "picomatch": "^4.0.3", + "postcss": "^8.5.6", + "rollup": "^4.43.0", + "tinyglobby": "^0.2.15" + }, + "bin": { + "vite": "bin/vite.js" + }, + "engines": { + "node": "^20.19.0 || >=22.12.0" + }, + "funding": { + "url": "https://github.com/vitejs/vite?sponsor=1" + }, + "optionalDependencies": { + "fsevents": "~2.3.3" + }, + "peerDependencies": { + "@types/node": "^20.19.0 || >=22.12.0", + "jiti": ">=1.21.0", + "less": "^4.0.0", + "lightningcss": "^1.21.0", + "sass": "^1.70.0", + "sass-embedded": "^1.70.0", + "stylus": ">=0.54.8", + "sugarss": "^5.0.0", + "terser": "^5.16.0", + "tsx": "^4.8.1", + "yaml": "^2.4.2" + }, + "peerDependenciesMeta": { + "@types/node": { + "optional": true + }, + "jiti": { + "optional": true + }, + "less": { + "optional": true + }, + "lightningcss": { + "optional": true + }, + "sass": { + "optional": true + }, + "sass-embedded": { + "optional": true + }, + "stylus": { + "optional": true + }, + "sugarss": { + "optional": true + }, + "terser": { + "optional": true + }, + "tsx": { + "optional": true + }, + "yaml": { + "optional": true + } + } + }, + "node_modules/vscode-uri": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/vscode-uri/-/vscode-uri-3.1.0.tgz", + 
"integrity": "sha512-/BpdSx+yCQGnCvecbyXdxHDkuk55/G3xwnC0GqY4gmQ3j+A+g8kzzgB4Nk/SINjqn6+waqw3EgbVF2QKExkRxQ==", + "dev": true, + "license": "MIT" + }, + "node_modules/vue": { + "version": "3.5.21", + "resolved": "https://registry.npmjs.org/vue/-/vue-3.5.21.tgz", + "integrity": "sha512-xxf9rum9KtOdwdRkiApWL+9hZEMWE90FHh8yS1+KJAiWYh+iGWV1FquPjoO9VUHQ+VIhsCXNNyZ5Sf4++RVZBA==", + "license": "MIT", + "dependencies": { + "@vue/compiler-dom": "3.5.21", + "@vue/compiler-sfc": "3.5.21", + "@vue/runtime-dom": "3.5.21", + "@vue/server-renderer": "3.5.21", + "@vue/shared": "3.5.21" + }, + "peerDependencies": { + "typescript": "*" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + } + } + }, + "node_modules/vue-router": { + "version": "4.5.1", + "resolved": "https://registry.npmjs.org/vue-router/-/vue-router-4.5.1.tgz", + "integrity": "sha512-ogAF3P97NPm8fJsE4by9dwSYtDwXIY1nFY9T6DyQnGHd1E2Da94w9JIolpe42LJGIl0DwOHBi8TcRPlPGwbTtw==", + "license": "MIT", + "dependencies": { + "@vue/devtools-api": "^6.6.4" + }, + "funding": { + "url": "https://github.com/sponsors/posva" + }, + "peerDependencies": { + "vue": "^3.2.0" + } + }, + "node_modules/vue-tsc": { + "version": "3.0.7", + "resolved": "https://registry.npmjs.org/vue-tsc/-/vue-tsc-3.0.7.tgz", + "integrity": "sha512-BSMmW8GGEgHykrv7mRk6zfTdK+tw4MBZY/x6fFa7IkdXK3s/8hQRacPjG9/8YKFDIWGhBocwi6PlkQQ/93OgIQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@volar/typescript": "2.4.23", + "@vue/language-core": "3.0.7" + }, + "bin": { + "vue-tsc": "bin/vue-tsc.js" + }, + "peerDependencies": { + "typescript": ">=5.0.0" + } + }, + "node_modules/vuetify": { + "version": "3.10.0", + "resolved": "https://registry.npmjs.org/vuetify/-/vuetify-3.10.0.tgz", + "integrity": "sha512-cgtssO0yriqwEdOd6jGfUqUUztunxYPDhyY+iog0Q8i5WEa+3+eQ/dGpXeFoU80qMVm0k6uPd8aJpc5wEVXu3g==", + "license": "MIT", + "engines": { + "node": "^12.20 || >=14.13" + }, + "funding": { + "type": "github", + "url": 
"https://github.com/sponsors/johnleider" + }, + "peerDependencies": { + "typescript": ">=4.7", + "vite-plugin-vuetify": ">=2.1.0", + "vue": "^3.5.0", + "webpack-plugin-vuetify": ">=3.1.0" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + }, + "vite-plugin-vuetify": { + "optional": true + }, + "webpack-plugin-vuetify": { + "optional": true + } + } + } + } +} diff --git a/docs/website/package.json b/docs/website/package.json new file mode 100644 index 0000000000000000000000000000000000000000..ad6dcc4580f6e2fb6ee87457bb61bb0bd7e4a185 --- /dev/null +++ b/docs/website/package.json @@ -0,0 +1,25 @@ +{ + "name": "wwwherdweb", + "private": true, + "version": "0.0.1", + "type": "module", + "scripts": { + "dev": "vite", + "build": "vue-tsc -b && vite build", + "preview": "vite preview" + }, + "dependencies": { + "vue": "^3.5.18", + "vue-router": "^4.5.1", + "vuetify": "^3.9.0" + }, + "devDependencies": { + "@mdi/font": "7.4.47", + "@types/node": "^24.0.10", + "@vitejs/plugin-vue": "^6.0.1", + "@vue/tsconfig": "^0.7.0", + "typescript": "~5.8.3", + "vite": "^7.1.2", + "vue-tsc": "^3.0.5" + } +} diff --git a/docs/website/public/favicon.ico b/docs/website/public/favicon.ico new file mode 100644 index 0000000000000000000000000000000000000000..8b37e31d6c7952f6f8816a114fab10686078f911 Binary files /dev/null and b/docs/website/public/favicon.ico differ diff --git a/docs/website/src/App.vue b/docs/website/src/App.vue new file mode 100644 index 0000000000000000000000000000000000000000..9c5660088e6be89e127ef56083be18ecc083ecf5 --- /dev/null +++ b/docs/website/src/App.vue @@ -0,0 +1,26 @@ + + + + + diff --git a/docs/website/src/assets/common.css b/docs/website/src/assets/common.css new file mode 100644 index 0000000000000000000000000000000000000000..95e07db6a5e144f18806026efb630eddef950730 --- /dev/null +++ b/docs/website/src/assets/common.css @@ -0,0 +1,17 @@ + +.top-button { + width: 120px; +} + +.sidebar { + max-width: 9% !important; + flex: 0 0 9% !important; 
+} + +.fill-height { + height: 100%; +} + +.border-b { + border-bottom: 1px solid rgba(0, 0, 0, 0.12); +} diff --git a/docs/website/src/assets/demo1.gif b/docs/website/src/assets/demo1.gif new file mode 100644 index 0000000000000000000000000000000000000000..32c146b7ba6b05454ded101df274feca03b3dd35 Binary files /dev/null and b/docs/website/src/assets/demo1.gif differ diff --git a/docs/website/src/assets/demo2.gif b/docs/website/src/assets/demo2.gif new file mode 100644 index 0000000000000000000000000000000000000000..703325b5a4df56826dd391d46dfb254a4fc3b375 Binary files /dev/null and b/docs/website/src/assets/demo2.gif differ diff --git a/docs/website/src/assets/demo3.gif b/docs/website/src/assets/demo3.gif new file mode 100644 index 0000000000000000000000000000000000000000..8b7cfa83d8eb988a60ba0186629850f5f4381ecd Binary files /dev/null and b/docs/website/src/assets/demo3.gif differ diff --git a/docs/website/src/assets/tour/tour_add_bug.webm b/docs/website/src/assets/tour/tour_add_bug.webm new file mode 100644 index 0000000000000000000000000000000000000000..11b1adc850c70f7358a21ab44754ce2df492b870 Binary files /dev/null and b/docs/website/src/assets/tour/tour_add_bug.webm differ diff --git a/docs/website/src/assets/tour/tour_case_2_design.webm b/docs/website/src/assets/tour/tour_case_2_design.webm new file mode 100644 index 0000000000000000000000000000000000000000..3d32f75e357d4547dd6b81419da028a1670d1698 Binary files /dev/null and b/docs/website/src/assets/tour/tour_case_2_design.webm differ diff --git a/docs/website/src/assets/tour/tour_case_history.webm b/docs/website/src/assets/tour/tour_case_history.webm new file mode 100644 index 0000000000000000000000000000000000000000..08510e4e357cb645f560c83317965b75f80fb486 Binary files /dev/null and b/docs/website/src/assets/tour/tour_case_history.webm differ diff --git a/docs/website/src/assets/tour/tour_costs_chart.webm b/docs/website/src/assets/tour/tour_costs_chart.webm new file mode 100644 index 
0000000000000000000000000000000000000000..be7d926662c78233b3b9ce2eac4888a517ade926 Binary files /dev/null and b/docs/website/src/assets/tour/tour_costs_chart.webm differ diff --git a/docs/website/src/assets/tour/tour_done_run.webm b/docs/website/src/assets/tour/tour_done_run.webm new file mode 100644 index 0000000000000000000000000000000000000000..6ddd6efefbf5a0f121becfbc8f8e452731082b1d Binary files /dev/null and b/docs/website/src/assets/tour/tour_done_run.webm differ diff --git a/docs/website/src/assets/tour/tour_export.webm b/docs/website/src/assets/tour/tour_export.webm new file mode 100644 index 0000000000000000000000000000000000000000..fa3602ec54fd2fa869d273fca44dc4e509790fff Binary files /dev/null and b/docs/website/src/assets/tour/tour_export.webm differ diff --git a/docs/website/src/assets/tour/tour_import.webm b/docs/website/src/assets/tour/tour_import.webm new file mode 100644 index 0000000000000000000000000000000000000000..b2886fb28c6ef69ec0d0c23851d9a27c670c7b70 Binary files /dev/null and b/docs/website/src/assets/tour/tour_import.webm differ diff --git a/docs/website/src/assets/tour/tour_new_run.webm b/docs/website/src/assets/tour/tour_new_run.webm new file mode 100644 index 0000000000000000000000000000000000000000..d0768105551ce08674d893dea8849c40e8d22c22 Binary files /dev/null and b/docs/website/src/assets/tour/tour_new_run.webm differ diff --git a/docs/website/src/assets/tour/tour_performance_chart.webm b/docs/website/src/assets/tour/tour_performance_chart.webm new file mode 100644 index 0000000000000000000000000000000000000000..38668a14d90c121a0b89afec5ef289be066026ad Binary files /dev/null and b/docs/website/src/assets/tour/tour_performance_chart.webm differ diff --git a/docs/website/src/assets/tour/tour_provider.webm b/docs/website/src/assets/tour/tour_provider.webm new file mode 100644 index 0000000000000000000000000000000000000000..9933d44937a5e28a1f61b67f2e2a934df9dcbc2f Binary files /dev/null and 
b/docs/website/src/assets/tour/tour_provider.webm differ diff --git a/docs/website/src/assets/tour/tour_recycle_run.webm b/docs/website/src/assets/tour/tour_recycle_run.webm new file mode 100644 index 0000000000000000000000000000000000000000..d2a30ab5ba7c170c8c4132e4067a71108c77608e Binary files /dev/null and b/docs/website/src/assets/tour/tour_recycle_run.webm differ diff --git a/docs/website/src/assets/tour/tour_retire_flow.webm b/docs/website/src/assets/tour/tour_retire_flow.webm new file mode 100644 index 0000000000000000000000000000000000000000..5df948035461490876bb062a4ab61502738c8b61 Binary files /dev/null and b/docs/website/src/assets/tour/tour_retire_flow.webm differ diff --git a/docs/website/src/assets/tour/tour_tc_settings.webm b/docs/website/src/assets/tour/tour_tc_settings.webm new file mode 100644 index 0000000000000000000000000000000000000000..916b871351c7d221161d0c17eaf3c48149d29ce4 Binary files /dev/null and b/docs/website/src/assets/tour/tour_tc_settings.webm differ diff --git a/docs/website/src/assets/tour/tour_using_tags.webm b/docs/website/src/assets/tour/tour_using_tags.webm new file mode 100644 index 0000000000000000000000000000000000000000..ee63e08c8724775b2141d4fa7c080c080b8b9644 Binary files /dev/null and b/docs/website/src/assets/tour/tour_using_tags.webm differ diff --git a/docs/website/src/assets/tour/tour_wipe.webm b/docs/website/src/assets/tour/tour_wipe.webm new file mode 100644 index 0000000000000000000000000000000000000000..e582e119a1c9393ddb291e52d354cf4f55a5fb4b Binary files /dev/null and b/docs/website/src/assets/tour/tour_wipe.webm differ diff --git a/docs/website/src/assets/wwwherd_logo_black.svg b/docs/website/src/assets/wwwherd_logo_black.svg new file mode 100644 index 0000000000000000000000000000000000000000..766126bab5ef6fc3dd649d12513b6d0d06b97aef --- /dev/null +++ b/docs/website/src/assets/wwwherd_logo_black.svg @@ -0,0 +1 @@ +wwwherd_logo_black \ No newline at end of file diff --git 
a/docs/website/src/components/AppFooter.vue b/docs/website/src/components/AppFooter.vue new file mode 100644 index 0000000000000000000000000000000000000000..e100328f2e48c5c71704fdc20f96b7fa4a8b16ed --- /dev/null +++ b/docs/website/src/components/AppFooter.vue @@ -0,0 +1,14 @@ + + \ No newline at end of file diff --git a/docs/website/src/components/AppHeader.vue b/docs/website/src/components/AppHeader.vue new file mode 100644 index 0000000000000000000000000000000000000000..29206783a11dcaa6da2efd017e9da1b22e9b4166 --- /dev/null +++ b/docs/website/src/components/AppHeader.vue @@ -0,0 +1,68 @@ + + + + \ No newline at end of file diff --git a/docs/website/src/components/DemoSection.vue b/docs/website/src/components/DemoSection.vue new file mode 100644 index 0000000000000000000000000000000000000000..54996dc03c2bdb56e9af6d0d2459f11da40d4b09 --- /dev/null +++ b/docs/website/src/components/DemoSection.vue @@ -0,0 +1,74 @@ + + + + + + diff --git a/docs/website/src/components/FeatureSection.vue b/docs/website/src/components/FeatureSection.vue new file mode 100644 index 0000000000000000000000000000000000000000..1c4ccb5f70c854a838b273ac951cc3a9eac3fd9a --- /dev/null +++ b/docs/website/src/components/FeatureSection.vue @@ -0,0 +1,111 @@ + + + + + + diff --git a/docs/website/src/components/GetStartedSection.vue b/docs/website/src/components/GetStartedSection.vue new file mode 100644 index 0000000000000000000000000000000000000000..3ebfcb3605120d0b5d09bc688650c215096e4b41 --- /dev/null +++ b/docs/website/src/components/GetStartedSection.vue @@ -0,0 +1,53 @@ + + + + + \ No newline at end of file diff --git a/docs/website/src/components/MainNavigation.vue b/docs/website/src/components/MainNavigation.vue new file mode 100644 index 0000000000000000000000000000000000000000..88e9eed07acea692c90eee0877eeb8973a0682f4 --- /dev/null +++ b/docs/website/src/components/MainNavigation.vue @@ -0,0 +1,30 @@ + + + diff --git a/docs/website/src/components/WhatIsSection.vue 
b/docs/website/src/components/WhatIsSection.vue new file mode 100644 index 0000000000000000000000000000000000000000..49b65c5d9dff69859f1d072ce22b92f58125204e --- /dev/null +++ b/docs/website/src/components/WhatIsSection.vue @@ -0,0 +1,45 @@ + + + + \ No newline at end of file diff --git a/docs/website/src/main.ts b/docs/website/src/main.ts new file mode 100644 index 0000000000000000000000000000000000000000..017499d56f0001428e0c035fb0e37240e8ccdbda --- /dev/null +++ b/docs/website/src/main.ts @@ -0,0 +1,16 @@ +// WWWHerd (c) 2025 Ginfra Project. All rights Reserved. Source licensed under Apache License Version 2.0, January 2004 +import { createApp } from 'vue'; +import vuetify from '@/plugins/vuetify'; +import router from '@/router'; +import App from '@/App.vue'; +import '@/assets/common.css' + +const app = createApp(App); + +// Add router to vue +app.use(router); + +// Add vuetify to vue +app.use(vuetify); + +app.mount('#app'); diff --git a/docs/website/src/plugins/vuetify.js b/docs/website/src/plugins/vuetify.js new file mode 100644 index 0000000000000000000000000000000000000000..835d08786343fda2d2376008beff1467f623d6b6 --- /dev/null +++ b/docs/website/src/plugins/vuetify.js @@ -0,0 +1,57 @@ +// WWWHerd (c) 2025 Ginfra Project. All rights Reserved. 
Source licensed under Apache License Version 2.0, January 2004 +import * as components from 'vuetify/components'; +import * as directives from 'vuetify/directives'; +import { createVuetify } from 'vuetify'; +import { aliases, mdi } from 'vuetify/iconsets/mdi'; + +import '@mdi/font/css/materialdesignicons.css'; +import 'vuetify/styles'; + +const topTheme = { + dark: false, + colors: { + stdbg: '#dbcef3', + primary: '#5061ca', + surface: "#ffffff", + secondary: '#dff4df', + disabled: '#ACACAC', + accent: '#219653', + anchor: '#4dd0e1', + dialogBackground: '#a4f1e5', + buttonLive: '#0259fb', + buttonDisabled: '#6e6d6d', + bug: '#f6917e', + history: '#9a6d29', + appbar: '#bed7f3', + appbutton: '#e0e4fd', + softbutton: '#5061ca', + hibutton: '#1f9f5b', + clickable: '#111112', + info: '#0a0a0a', + error: '#ff0000', + ok: '#00aa00', + success: '#00ff00', + warning: '#ff0000', + wlabel: '#7707c1' + }, +}; +const vuetify = createVuetify({ + components, + directives, + icons: { + defaultSet: 'mdi', + aliases, + sets: { + mdi, + }, + }, + theme: { + defaultTheme: 'topTheme', + themes: { + topTheme, + }, + }, +}); + +export default vuetify; + diff --git a/docs/website/src/router.ts b/docs/website/src/router.ts new file mode 100644 index 0000000000000000000000000000000000000000..82355ae5201d109874d27248fb762c85eb68c889 --- /dev/null +++ b/docs/website/src/router.ts @@ -0,0 +1,31 @@ +// WWWHerd (c) 2025 Ginfra Project. All rights Reserved. 
Source licensed under Apache License Version 2.0, January 2004 +import { createRouter, createWebHistory } from 'vue-router' +//import {useAuthStore} from "./stores/auth.ts"; + +const router = createRouter({ + history: createWebHistory(import.meta.env.BASE_URL), + routes: [ + { + path: '/', + name: 'main', + component: () => import('./routes/main/MainView.vue'), + }, + { + path: '/tour', + name: 'tour', + component: () => import('./routes/tour/TourView.vue'), + } + ] +}) + +router.beforeEach((to, from, next) => { + //const auth = useAuthStore() + //if (to.meta.requiresAuth && !auth.authToken) { + // next('/login') + //} else { + // next() + //} + next() +}) + +export default router \ No newline at end of file diff --git a/docs/website/src/routes/main/MainView.vue b/docs/website/src/routes/main/MainView.vue new file mode 100644 index 0000000000000000000000000000000000000000..cfcabbeef1fc8ac5dc8999dcb3956267209a6f06 --- /dev/null +++ b/docs/website/src/routes/main/MainView.vue @@ -0,0 +1,24 @@ +// WWWHerd (c) 2025 Ginfra Project. All rights Reserved. 
Source licensed under Apache License Version 2.0, January 2004 + + + diff --git a/docs/website/src/routes/tour/TourView.vue b/docs/website/src/routes/tour/TourView.vue new file mode 100644 index 0000000000000000000000000000000000000000..8101fb443c0b8fd470137c7bf8f06b92f90097df --- /dev/null +++ b/docs/website/src/routes/tour/TourView.vue @@ -0,0 +1,398 @@ + + + + + + diff --git a/docs/website/src/style.css b/docs/website/src/style.css new file mode 100644 index 0000000000000000000000000000000000000000..f6913154345dbceb3a705e2583ed9afb3857924e --- /dev/null +++ b/docs/website/src/style.css @@ -0,0 +1,79 @@ +:root { + font-family: system-ui, Avenir, Helvetica, Arial, sans-serif; + line-height: 1.5; + font-weight: 400; + + color-scheme: light dark; + color: rgba(255, 255, 255, 0.87); + background-color: #242424; + + font-synthesis: none; + text-rendering: optimizeLegibility; + -webkit-font-smoothing: antialiased; + -moz-osx-font-smoothing: grayscale; +} + +a { + font-weight: 500; + color: #646cff; + text-decoration: inherit; +} +a:hover { + color: #535bf2; +} + +body { + margin: 0; + display: flex; + place-items: center; + min-width: 320px; + min-height: 100vh; +} + +h1 { + font-size: 3.2em; + line-height: 1.1; +} + +button { + border-radius: 8px; + border: 1px solid transparent; + padding: 0.6em 1.2em; + font-size: 1em; + font-weight: 500; + font-family: inherit; + background-color: #1a1a1a; + cursor: pointer; + transition: border-color 0.25s; +} +button:hover { + border-color: #646cff; +} +button:focus, +button:focus-visible { + outline: 4px auto -webkit-focus-ring-color; +} + +.card { + padding: 2em; +} + +#app { + max-width: 1280px; + margin: 0 auto; + padding: 2rem; + text-align: center; +} + +@media (prefers-color-scheme: light) { + :root { + color: #213547; + background-color: #ffffff; + } + a:hover { + color: #747bff; + } + button { + background-color: #f9f9f9; + } +} diff --git a/docs/website/src/vite-env.d.ts b/docs/website/src/vite-env.d.ts new file mode 
100644 index 0000000000000000000000000000000000000000..11f02fe2a0061d6e6e1f271b21da95423b448b32 --- /dev/null +++ b/docs/website/src/vite-env.d.ts @@ -0,0 +1 @@ +/// diff --git a/docs/website/tsconfig.app.json b/docs/website/tsconfig.app.json new file mode 100644 index 0000000000000000000000000000000000000000..1efa439706083405533f7e3619dd9a136d077016 --- /dev/null +++ b/docs/website/tsconfig.app.json @@ -0,0 +1,26 @@ +{ + "extends": "@vue/tsconfig/tsconfig.dom.json", + "include": [ + "env.d.ts", + "src/**/*", + "src/**/*.vue" + ], + "exclude": [ + "src/**/__tests__/*" + ], + "compilerOptions": { + "composite": true, + "tsBuildInfoFile": "./node_modules/.tmp/tsconfig.app.tsbuildinfo", + "baseUrl": ".", + "paths": { + "@/*": ["./src/*"] + }, + "types": ["vite/client"], + "moduleResolution": "bundler", + "allowImportingTsExtensions": true, + "strict": true, + "noEmit": true, + "isolatedModules": true, + "skipLibCheck": true + } +} \ No newline at end of file diff --git a/docs/website/tsconfig.json b/docs/website/tsconfig.json new file mode 100644 index 0000000000000000000000000000000000000000..1ffef600d959ec9e396d5a260bd3f5b927b2cef8 --- /dev/null +++ b/docs/website/tsconfig.json @@ -0,0 +1,7 @@ +{ + "files": [], + "references": [ + { "path": "./tsconfig.app.json" }, + { "path": "./tsconfig.node.json" } + ] +} diff --git a/docs/website/tsconfig.node.json b/docs/website/tsconfig.node.json new file mode 100644 index 0000000000000000000000000000000000000000..f85a39906e5571aa351e61e43fff98bc0bedaa27 --- /dev/null +++ b/docs/website/tsconfig.node.json @@ -0,0 +1,25 @@ +{ + "compilerOptions": { + "tsBuildInfoFile": "./node_modules/.tmp/tsconfig.node.tsbuildinfo", + "target": "ES2023", + "lib": ["ES2023"], + "module": "ESNext", + "skipLibCheck": true, + + /* Bundler mode */ + "moduleResolution": "bundler", + "allowImportingTsExtensions": true, + "verbatimModuleSyntax": true, + "moduleDetection": "force", + "noEmit": true, + + /* Linting */ + "strict": true, + 
"noUnusedLocals": true, + "noUnusedParameters": true, + "erasableSyntaxOnly": true, + "noFallthroughCasesInSwitch": true, + "noUncheckedSideEffectImports": true + }, + "include": ["vite.config.ts"] +} diff --git a/docs/website/vite.config.ts b/docs/website/vite.config.ts new file mode 100644 index 0000000000000000000000000000000000000000..e8ae1e986aabb9eca4a0d41347bef7ee1e2f7d85 --- /dev/null +++ b/docs/website/vite.config.ts @@ -0,0 +1,22 @@ +// https://vite.dev/config/ +import { defineConfig } from 'vite' +import vue from '@vitejs/plugin-vue' +import { fileURLToPath, URL } from 'node:url' + + +export default defineConfig({ + plugins: [vue()], + resolve: { + alias: { + '@': fileURLToPath(new URL('./src', import.meta.url)) + }, + }, + css: { + preprocessorOptions: { + scss: { + additionalData: `@import "@/styles/variables.scss";`, + }, + }, + }, +}) + diff --git a/integration/build/all/Dockerfile b/integration/build/all/Dockerfile index 4f8b4184362e156f8c5c34753f2f1490bdd7d7ce..47bed1c47574b5ed238a2926bd3029b489c3bee2 100644 --- a/integration/build/all/Dockerfile +++ b/integration/build/all/Dockerfile @@ -6,13 +6,13 @@ COPY integration/build/all/files/ /files/ RUN chmod 755 /files/*.sh RUN /files/provision.sh -# Magnitude -COPY magnitude/build/ginfra_service.exe /wwwherd/magnitude/ginfra_service.exe -COPY magnitude/src /wwwherd/magnitude/src -COPY magnitude/package.json /wwwherd/magnitude/package.json -COPY magnitude/specifics.json /wwwherd/magnitude/specifics.json -COPY integration/build/all/config/magnitude_config.json /wwwherd/magnitude/config.json -COPY integration/build/all/config/magnitude_service_config.json /wwwherd/magnitude/service_config.json +# Runner +COPY runner/build/ginfra_service.exe /wwwherd/runner/ginfra_service.exe +COPY runner/src /wwwherd/runner/src +COPY runner/package.json /wwwherd/runner/package.json +COPY runner/specifics.json /wwwherd/runner/specifics.json +COPY integration/build/all/config/runner_config.json /wwwherd/runner/config.json 
+COPY integration/build/all/config/runner_service_config.json /wwwherd/runner/service_config.json # GINFRA control and dispatch service. COPY service/build/ginfra_service.exe /wwwherd/service_ginfra/ginfra_service.exe diff --git a/integration/build/all/config/ginfra_config.json b/integration/build/all/config/ginfra_config.json index f5e466857d91f680a1c18f5c0a665770b5889c44..cd3b8e80e7a91a5e645fde4ae4adb15a8a7e9e08 100644 --- a/integration/build/all/config/ginfra_config.json +++ b/integration/build/all/config/ginfra_config.json @@ -7,7 +7,8 @@ "*": "<<>>" }, "services": { - "magnitude/test": "http://<<>>:8900/", + "server_otelcol/<<>>": "<<>>", + "runner/test": "http://<<>>:8900/", "wwwherd": "http://<<>>:8901/" } }, @@ -33,7 +34,7 @@ "service_class": "wwwherd", "specifics": { "postgres_url": "postgres://postgres:<<>>@<<>>:5432", - "magnitude_auth": "AAAAAAAAAAAA" + "runner_auth": "AAAAAAAAAAAA" } }, "service_auth": "AAAAAAAAAAAA" diff --git a/integration/build/all/config/magnitude_config.json b/integration/build/all/config/runner_config.json similarity index 88% rename from integration/build/all/config/magnitude_config.json rename to integration/build/all/config/runner_config.json index 9043d8678543fbe1f7319554a49caebce4985dd4..53d77db0108d71236452904167864db8979e7ce0 100644 --- a/integration/build/all/config/magnitude_config.json +++ b/integration/build/all/config/runner_config.json @@ -7,8 +7,9 @@ "*": "<<>>" }, "services": { - "magnitude/test": "http://<<>>:8900/", - "wwwherd": "http://<<>>:8901/" + "runner/test": "http://<<>>:8900/", + "wwwherd": "http://<<>>:8901/", + "server_otelcol/<<>>": "<<>>" } }, "services": { diff --git a/integration/build/all/config/magnitude_service_config.json b/integration/build/all/config/runner_service_config.json similarity index 100% rename from integration/build/all/config/magnitude_service_config.json rename to integration/build/all/config/runner_service_config.json diff --git a/integration/build/all/files/configure.sh 
b/integration/build/all/files/configure.sh index 7d1bafc33b2ba4b72981b22c2e5e7296182f42a0..bd48aeb4006e4247bd10949093b65d798a434241 100644 --- a/integration/build/all/files/configure.sh +++ b/integration/build/all/files/configure.sh @@ -34,13 +34,13 @@ replace_config_placeholders() { # ################################################################################################## # CONFIG FILES -replace_config_placeholders "/wwwherd/magnitude/config.json" "localhost" +replace_config_placeholders "/wwwherd/runner/config.json" "localhost" replace_config_placeholders "/wwwherd/service_ginfra/config.json" "localhost" # ################################################################################################## -# MAGNITUDE -cd /wwwherd/magnitude +# RUNNER +cd /wwwherd/runner npm install npx playwright install diff --git a/integration/build/all/files/entrypoint.sh b/integration/build/all/files/entrypoint.sh index 6bef226b0ba6f5a5bf3cddb19d15a663e2191f9b..d0b2f9bd76092368d15e77dedb588cbbd0e2d196 100644 --- a/integration/build/all/files/entrypoint.sh +++ b/integration/build/all/files/entrypoint.sh @@ -28,18 +28,45 @@ replace_config_placeholders() { echo "Config file $config_file updated successfully" } +# JWT SECRET +if [[ ! 
-f "/jwt_secret" ]]; then + # Generate 64 character random alphanumeric string (upper and lowercase letters + numbers) + JWT_SECRET=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 64 | head -n 1) + echo "$JWT_SECRET" > /jwt_secret +fi +export WWWHERD_JWT_SECRET=$(cat /jwt_secret) pg_ctlcluster 16 main start -# MAGNITUDE -cd /wwwherd/magnitude -./ginfra_service.exe /wwwherd/magnitude >> /wwwherd/magnitude/log/service.log 2>&1 & +# RUNNER + +# Replace OTELENDPOINT if environment variable is set +if [[ -n "$OTELENDPOINT" ]]; then + sed -i "s|<<>>|$OTELENDPOINT|g" "/wwwherd/runner/config.json" + sed -i "s/<<>>/collector/g" "/wwwherd/runner/config.json" + echo "OTELENDPOINT placeholder replaced with: $OTELENDPOINT" +else + echo "Warning: OTELENDPOINT environment variable not set, placeholder not replaced" +fi + +cd /wwwherd/runner +./ginfra_service.exe /wwwherd/runner >> /wwwherd/runner/log/service.log 2>&1 & # API SERVICE cd /wwwherd/api node main.js >> /wwwherd/api/log/service.log 2>&1 & -# GINGRA CONTROL AND DISPATCH SERVICE +# GINFRA CONTROL AND DISPATCH SERVICE + +# Replace OTELENDPOINT if environment variable is set +if [[ -n "$OTELENDPOINT" ]]; then + sed -i "s|<<>>|$OTELENDPOINT|g" "/wwwherd/service_ginfra/config.json" + sed -i "s/<<>>/collector/g" "/wwwherd/service_ginfra/config.json" + echo "OTELENDPOINT placeholder replaced with: $OTELENDPOINT" +else + echo "Warning: OTELENDPOINT environment variable not set, placeholder not replaced" +fi + cd /wwwherd/service_ginfra ./ginfra_service.exe /wwwherd/service_ginfra >> /wwwherd/service_ginfra/log/service.log 2>&1 & diff --git a/integration/build/all/files/provision.sh b/integration/build/all/files/provision.sh index b20787a8e51da83a370f953e19392b56c0e4a9ec..81827b80dfc171108de78132085d638b187c624a 100644 --- a/integration/build/all/files/provision.sh +++ b/integration/build/all/files/provision.sh @@ -19,15 +19,15 @@ chmod 755 /entrypoint.sh mkdir /wwwherd mkdir /wwwherd/import -# MAGNITUDE SERVER -mkdir 
/wwwherd/magnitude -mkdir /wwwherd/magnitude/log +# RUNNER SERVER +mkdir /wwwherd/runner +mkdir /wwwherd/runner/log # GINFRA control and dispatch service. mkdir /wwwherd/service_ginfra mkdir /wwwherd/service_ginfra/schema mkdir /wwwherd/service_ginfra/log -mkdir /wwwherd/service_ginfra/magnitude +mkdir /wwwherd/service_ginfra/runner # UI server. mkdir /wwwherd/ui diff --git a/magnitude/service/common/utils.go b/magnitude/service/common/utils.go deleted file mode 100644 index 75b39bab3b8347d0d1feaa2275f712da7fb80410..0000000000000000000000000000000000000000 --- a/magnitude/service/common/utils.go +++ /dev/null @@ -1,105 +0,0 @@ -/* -Package common -Copyright (c) 2025 Erich Gatejen -[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt - -Common data. -*/ -package common - -import ( - "bufio" - "encoding/json" - "fmt" - "gitlab.com/ginfra/ginfra/base" - sflow "gitlab.com/ginfra/wwwherd/common/script/flow" - "gitlab.com/ginfra/wwwherd/magnitude/local" - "go.uber.org/zap" - "io" - "strings" -) - -func CleanLog(dirtext string, dirlog []interface{}) ([]interface{}, string, bool) { - filtered := make([]interface{}, 0) - errored := false - errtext := "" - for _, entry := range dirlog { - if e, ok := entry.(map[string]interface{}); ok { - if msg, ok := e["message"].(string); ok { - if !strings.HasPrefix(msg, dirtext) && !strings.Contains(msg, "[start] agent") { - filtered = append(filtered, entry) - } - } - if t, ok := e["type"].(string); ok && t == "ERROR" { - // Case has errored - errored = true - // Handle e["message"] - can be string, json, or non-existent - var messageStr string - if msgValue, exists := e["message"]; exists { - switch v := msgValue.(type) { - case string: - messageStr = v - case map[string]interface{}, []interface{}: - // If it's JSON (object or array), marshal it back to string - if jsonBytes, err := json.Marshal(v); err == nil { - messageStr = string(jsonBytes) - } else { - messageStr = fmt.Sprintf("%v", v) // fallback - } - default: - // For 
any other type, convert to string - messageStr = fmt.Sprintf("%v", v) - } - } else { - // message doesn't exist - messageStr = "No error message provided." - } - errtext = messageStr - } - } - } - return filtered, errtext, errored -} - -func RunDirective(directive sflow.Directive, stdin io.WriteCloser, scanner *bufio.Scanner) ([]interface{}, error) { - // Send command. - if _, err := stdin.Write([]byte(directive.String() + "\n")); err != nil { - panic(fmt.Errorf("failed to write directive to stdin: %w", err)) - } - - // Flush the stdin to ensure the command is sent immediately - if flusher, ok := stdin.(interface{ Flush() error }); ok { - if err := flusher.Flush(); err != nil { - panic(fmt.Errorf("failed to flush stdin: %w", err)) - } - } - local.Logger.Debug("Directive sent", zap.String("directive", directive.String())) - - open := 0 - var output strings.Builder - // Snarf the output json. - for scanner.Scan() { - line := strings.TrimSpace(scanner.Text()) - if line == "" || (strings.HasPrefix(line, "{") && open < 1) { // log leakage. - continue - } - if strings.HasPrefix(line, "[") { - open++ - } else if strings.HasSuffix(line, "]") { - open-- - } - if open < 1 { - output.WriteString("]\n") - break - } - output.WriteString(line + "\n") - } - - var jsonObj []interface{} - if err := json.Unmarshal([]byte(output.String()), &jsonObj); err != nil { - local.Logger.Error("Failed to parse JSON from RunDirective.", zap.Error(err), zap.String("output", output.String())) - return jsonObj, base.NewGinfraError("Process had a hard failure. Seeing these occasionally is normal, as AI providers can experience varying reliability and performance. If it happens frequently, contact support. 
") - } - - return jsonObj, nil -} diff --git a/magnitude/service/flow/script.go b/magnitude/service/flow/script.go deleted file mode 100644 index da3e4e30ecf3b1bb9a4e8b3c3ce390aa6518667d..0000000000000000000000000000000000000000 --- a/magnitude/service/flow/script.go +++ /dev/null @@ -1,122 +0,0 @@ -/* -Package flow -Copyright (c) 2025 Erich Gatejen -[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt - -Flow script -*/ -package flow - -import ( - "bufio" - "gitlab.com/ginfra/ginfra/base" - "gitlab.com/ginfra/ginfra/common" - "gitlab.com/ginfra/wwwherd/common/data" - "gitlab.com/ginfra/wwwherd/common/script" - sflow "gitlab.com/ginfra/wwwherd/common/script/flow" - "gitlab.com/ginfra/wwwherd/magnitude/local" - scommon "gitlab.com/ginfra/wwwherd/magnitude/service/common" - "go.uber.org/zap" -) - -type Dispatcher struct { -} - -func GetDispatcher() *Dispatcher { - return &Dispatcher{} -} - -func (d *Dispatcher) RunCase(runner *script.Runner, thecase *data.OrderCaseItem, fdec *data.Decorations, - values map[string]interface{}, prompt string) ([]interface{}, error) { - - var ( - err error - dirlog []interface{} - log = make([]interface{}, 0, len(thecase.Compiled.(sflow.CompiledCase).Directives)+1) - ) - - // Get the CompiledCase from the run - compiledCase, ok := thecase.Compiled.(sflow.CompiledCase) - if !ok { - panic("invalid compiled case type for flow.") - } - - // Scanner to read stdout line by line - scanner := bufio.NewScanner(runner.Stdout) - - // Flow prompt? - if thecase.Decorated.Strict && !fdec.Strict { - prompt = prompt + scommon.PromptStrict - } - - var w sflow.FlowDirectiveType - if prompt == "" { - w = sflow.DirectivePromptGlobalClear - } else { - w = sflow.DirectivePromptGlobal - } - if dirlog, err = scommon.RunDirective(sflow.Directive{ - Type: w, - Text: prompt, - LineNum: 0, - }, runner.Stdin, scanner); err != nil { - local.Logger.Error("Error setting or clearing flow prompt. 
Terminating.", zap.Error(err)) - } - wf, _, we := scommon.CleanLog(w.String(), dirlog) - log = append(log, map[string]interface{}{ - "directive": w.String(), - "log": wf, - "linenumber": 0, - }) - if we { - return log, base.NewGinfraError("Error setting or clearing flow prompt.") - } - - for _, directive := range compiledCase.Directives { - - // Value replacement in directive. - if directive.Text, err = common.SimpleReplacer(directive.Text, values, true); err != nil { - local.Logger.Error("Error in script. Terminating.", zap.Error(err)) - thecase.Error = err.Error() - thecase.Result = data.RunStatusError - break - } - - if dirlog, err = scommon.RunDirective(directive, runner.Stdin, scanner); err != nil { - local.Logger.Error("Error in script. Terminating.", zap.Error(err)) - thecase.Error = err.Error() - thecase.Result = data.RunStatusError - break - } - - // Clean log and check for failure. - // TODO mess code. Clean it up. - if dirlog != nil { - filtered, etemp, errored := scommon.CleanLog(directive.Type.String(), dirlog) - if etemp != "" { - thecase.Error = etemp - } - - // Save to logs - log = append(log, map[string]interface{}{ - "directive": directive.Type.String(), - "text": directive.Text, - "log": filtered, - "linenumber": directive.LineNum, - }) - - // Handle an error - if errored { - thecase.Result = data.RunStatusError - break - } else { - thecase.Result = data.RunStatusPassed - } - - } else { - thecase.Result = data.RunStatusPassed - } - } - - return log, err -} diff --git a/magnitude/service/raw/script.go b/magnitude/service/raw/script.go deleted file mode 100644 index dd7e5c2892b2853f5291ff6c6af0036042910f63..0000000000000000000000000000000000000000 --- a/magnitude/service/raw/script.go +++ /dev/null @@ -1,80 +0,0 @@ -/* -Package raw -Copyright (c) 2025 Erich Gatejen -[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt - -Raw format. 
-*/ -package raw - -import ( - "bufio" - "gitlab.com/ginfra/ginfra/common" - "gitlab.com/ginfra/wwwherd/common/data" - "gitlab.com/ginfra/wwwherd/common/script" - sflow "gitlab.com/ginfra/wwwherd/common/script/flow" - "gitlab.com/ginfra/wwwherd/magnitude/local" - scommon "gitlab.com/ginfra/wwwherd/magnitude/service/common" - "go.uber.org/zap" -) - -type Dispatcher struct { -} - -func GetDispatcher() *Dispatcher { - return &Dispatcher{} -} - -func (d *Dispatcher) RunCase(runner *script.Runner, thecase *data.OrderCaseItem, fdec *data.Decorations, - values map[string]interface{}, prompt string) ([]interface{}, error) { - var ( - err error - dirlog []interface{} - filtered []interface{} - errored bool - etemp string - log = make([]interface{}, 0, len(thecase.Compiled.(sflow.CompiledCase).Directives)+1) - ) - - // Scanner to read stdout line by line - scanner := bufio.NewScanner(runner.Stdout) - - if prompt, err = common.SimpleReplacer(prompt, values, true); err != nil { - local.Logger.Error("Error in script.", zap.Error(err)) - thecase.Error = err.Error() - thecase.Result = data.RunStatusError - - } else { - if dirlog, err = scommon.RunDirective(sflow.Directive{ - Type: sflow.DirectiveRaw, - Text: prompt, - LineNum: 0, - }, runner.Stdin, scanner); err != nil { - local.Logger.Error("Error in script.", zap.Error(err)) - } - - // Clean log and check for failure. 
- filtered, etemp, errored = scommon.CleanLog(sflow.DirectiveRaw.String(), dirlog) - if etemp != "" { - thecase.Error = etemp - } - - } - - // Save to logs - log = append(log, map[string]interface{}{ - "directive": sflow.DirectiveRaw.String(), - "text": "See UI", - "log": filtered, - "linenumber": 0, - }) - - // Handle an error - if errored || err != nil { - thecase.Result = data.RunStatusError - } else { - thecase.Result = data.RunStatusPassed - } - - return log, err -} diff --git a/magnitude/service/service_store.go b/magnitude/service/service_store.go deleted file mode 100644 index 9833cc8b904665ea02f5e4b9c071bf6f6bf74b31..0000000000000000000000000000000000000000 --- a/magnitude/service/service_store.go +++ /dev/null @@ -1,187 +0,0 @@ -/* -Package service -Copyright (c) 2025 Erich Gatejen -[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt, if unaltered. You are free to alter and apply any license -you wish to this file. - -Work maangement. -*/ -package service - -import ( - cmap "github.com/orcaman/concurrent-map/v2" - "gitlab.com/ginfra/ginfra/base" - "gitlab.com/ginfra/wwwherd/magnitude/service/common" -) - -//type RunSpecMap cmap.ConcurrentMap[string, *RunSpec] -- This just puts Goland in a tizzy, so remove it for now. 
- -var ( - runQueue = cmap.New[[]*common.RunSpec]() - running = cmap.New[cmap.ConcurrentMap[string, *common.RunSpec]]() - runHistory = cmap.New[cmap.ConcurrentMap[string, *common.RunSpec]]() -) - -func initStore() { - runQueue.Clear() - running.Clear() - runHistory.Clear() -} - -// -- RUN QUEUE --------------------------------------------------------------------------------------------------- - -func runQueuePut(uid string, run *common.RunSpec) { - if !runQueue.Has(uid) { - runQueue.Set(uid, []*common.RunSpec{}) - } - if queue, ok := runQueue.Get(uid); ok { - queue = append(queue, run) - runQueue.Set(uid, queue) - } -} - -func runQueueArray(uid string) []*common.RunSpec { - // Make a copy - specs := make([]*common.RunSpec, 0) - if m, ok := runQueue.Get(uid); ok { - for _, item := range m { - specs = append(specs, item) - } - } - return specs -} - -// -- RUNNING --------------------------------------------------------------------------------------------------- - -func runningNumberRunning(uid string) int { - if i, ok := runQueue.Get(uid); ok { - return len(i) - } - return 0 -} - -func runningGetUid(uid string) cmap.ConcurrentMap[string, *common.RunSpec] { - if !running.Has(uid) { - m := cmap.New[*common.RunSpec]() - running.Set(uid, m) - return m - } - m, _ := running.Get(uid) - return m -} - -func runningPut(uid, id string, run *common.RunSpec) { - m := runningGetUid(uid) - m.Set(id, run) -} - -func runningGetArray(uid string) []*common.RunSpec { - specs := make([]*common.RunSpec, 0) - if m, ok := running.Get(uid); ok { - for item := range m.IterBuffered() { - specs = append(specs, item.Val) - } - } - return specs -} - -// -- HISTORY -------------------------------------------------------------------------------------------------- - -func historyGetUid(uid string) cmap.ConcurrentMap[string, *common.RunSpec] { - if !runHistory.Has(uid) { - m := cmap.New[*common.RunSpec]() - runHistory.Set(uid, m) - return m - } - m, _ := runHistory.Get(uid) - return m -} - -func 
historyPut(uid, id string, run *common.RunSpec) { - m := historyGetUid(uid) - m.Set(id, run) -} - -func historyRemove(uid, id string) *common.RunSpec { - var r *common.RunSpec - if m, ok := runHistory.Get(uid); ok { - if r, ok = m.Get(id); ok { - m.Remove(id) - } - if m.Count() == 0 { - runHistory.Remove(uid) - } - } - return r -} - -func historyGetArray(uid string) []*common.RunSpec { - specs := make([]*common.RunSpec, 0) - if m, ok := runHistory.Get(uid); ok { - for item := range m.IterBuffered() { - specs = append(specs, item.Val) - } - } - return specs -} - -// -- ALL -------------------------------------------------------------------------------------- - -func moveFromRunningToHistory(run *common.RunSpec) { - historyPut(run.Uid, run.Id, run) - if runMap, ok := running.Get(run.Uid); ok { - runMap.Remove(run.Id) - if runMap.Count() == 0 { - running.Remove(run.Uid) - } - } -} - -func findRunSpec(uid, id string) (*common.RunSpec, error) { - - for l := range running.IterBuffered() { - lv := l.Val - for s := range lv.IterBuffered() { - spec := s.Val - if spec.Id == id { - if spec.Uid == uid { - return spec, nil - } else { - return nil, base.NewGinfraErrorAFlags("RUNNING: Run does not belong to user.", - base.ERRFLAG_BAD_ID, base.LM_ID, id, base.LM_UID, uid) - } - } - } - } - - for l := range runQueue.IterBuffered() { - lv := l.Val - for _, spec := range lv { - if spec.Id == id { - if spec.Uid == uid { - return spec, nil - } else { - return nil, base.NewGinfraErrorAFlags("QUEUE: Run does not belong to user.", - base.ERRFLAG_BAD_ID, base.LM_ID, id, base.LM_UID, uid) - } - } - } - } - - for l := range runHistory.IterBuffered() { - lv := l.Val - for s := range lv.IterBuffered() { - spec := s.Val - if spec.Id == id { - if spec.Uid == uid { - return spec, nil - } else { - return nil, base.NewGinfraErrorAFlags("HISTORY: Test does not belong to user.", - base.ERRFLAG_BAD_ID, base.LM_ID, id, base.LM_UID, uid) - } - } - } - } - - return nil, base.NewGinfraErrorAFlags("Id not 
found.", base.ERRFLAG_NOT_FOUND, base.LM_ID, id) -} diff --git a/magnitude/service/service_work.go b/magnitude/service/service_work.go deleted file mode 100644 index 629d9ec965132ef51908d47c1f2ec5b0ef16ed06..0000000000000000000000000000000000000000 --- a/magnitude/service/service_work.go +++ /dev/null @@ -1,440 +0,0 @@ -/* -Package service -Copyright (c) 2025 Erich Gatejen -[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt, if unaltered. You are free to alter and apply any license -you wish to this file. - -Work management. -*/ -package service - -import ( - "encoding/json" - "gitlab.com/ginfra/ginfra/base" - "gitlab.com/ginfra/wwwherd/common/data" - "gitlab.com/ginfra/wwwherd/common/script" - fscript "gitlab.com/ginfra/wwwherd/common/script/flow" - rscript "gitlab.com/ginfra/wwwherd/common/script/raw" - "gitlab.com/ginfra/wwwherd/common/stream" - "gitlab.com/ginfra/wwwherd/magnitude/local" - scommon "gitlab.com/ginfra/wwwherd/magnitude/service/common" - "gitlab.com/ginfra/wwwherd/magnitude/service/flow" - "gitlab.com/ginfra/wwwherd/magnitude/service/raw" - "go.uber.org/zap" - "os" - "path/filepath" - "strconv" - "strings" - "time" -) - -// ##################################################################################################################### -// # VALUES AND DATA - -// Used during translation. 
-// ##################################################################################################################### -// # WORK - -// -- orders ---------------------------------------------------------------------------------------------------- - -func findPrompt(text string) string { - lines := strings.Split(text, "\n") - var result []string - found := false - - for _, line := range lines { - if strings.Contains(line, "##PROMPT") { - if found { - break - } - found = true - continue - } - if found { - result = append(result, line) - } - } - - if len(result) == 0 { - return "" - } - return strings.Join(result, "\n") -} - -func sendStatus(client stream.StreamClient, run *scommon.RunSpec, status data.RunStatus, data string, wid int) { - run.Status = status - if err := client.Send(stream.NewMsgRunStatus(run.IdInt, status, data)); err != nil { - local.Logger.Error("Failed to send status.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, wid)) - } -} - -// -- worker ------------------------------------------------------------------------------------------------------- - -func getRunner(target, dfile, provider, model, key, url string) (*script.Runner, error) { - return script.NewRunner(target, dfile, provider, model, key, url) -} - -func getCaseDispatch(thecase *data.OrderCaseItem) script.Dispatcher { - switch thecase.Type { - case data.CaseTypeFlowScript: - return flow.GetDispatcher() - case data.CaseTypeRaw: - return raw.GetDispatcher() - default: - panic("Unknown case type.") - } -} - -func getCaseCompiler(thecase *data.OrderCaseItem) script.Compiler { - switch thecase.Type { - case data.CaseTypeFlowScript: - return &fscript.Compiler{} - case data.CaseTypeRaw: - return &rscript.Compiler{} - default: - panic("Unknown case type.") - } -} - -func compileOrders(spec *scommon.RunSpec) int { - if err := json.Unmarshal([]byte(spec.Text), &spec.Order); err != nil { - spec.Err = base.NewGinfraErrorA("Failed to 
unmarshal orders.", err).Error() - return 1 - } - - errnum := 0 - for flIdx := range spec.Order.Flows { - fl := &spec.Order.Flows[flIdx] - - // Range through all cases in the current fl - for caseIdx := range fl.Cases { - case_ := &fl.Cases[caseIdx] - - // Compile the case using caseCompileDispatch - compiler := getCaseCompiler(case_) - cc, errors := compiler.CompileCase(case_.Order) - - if errors != nil && len(errors) > 0 { - // If there are errors, put them into the Error field and set Result to RunStatusError - case_.Error = base.NewGinfraErrorA("Compile failed.", errors).Error() - case_.Result = data.RunStatusError - errnum++ - } else { - // If no error, put the compiled result into the Compiled field - case_.Compiled = cc - } - } - } - return errnum -} - -func shipOrders(run *scommon.RunSpec, id int, client stream.StreamClient, final bool) { - orderText, err := json.Marshal(run.Order) - if err != nil { - local.Logger.Error("Failed to marshal orders.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } else if err = client.Send(stream.NewMsgRunOrders(run.IdInt, string(orderText), final)); err != nil { - local.Logger.Error("Failed to send orders.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } -} - -func managePanic(run *scommon.RunSpec, id int, client stream.StreamClient, r any) { - - dfile, err := os.ReadFile(filepath.Join(run.Path, local.DispositionFile)) - if err == nil { - if strings.Contains(string(dfile), "Agent start failed: page.goto") { - sendStatus(client, run, data.RunStatusError, "Could not connect to target host: "+run.Host, id) - return - } - local.Logger.Error("Unknown panic recovered in worker", zap.Any("panic", r), - zap.Int(base.LM_WORKER_ID, id), zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid)) - - } else { - local.Logger.Error("Failed to read disposition file.", zap.Error(err), 
zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } - - sendStatus(client, run, data.RunStatusError, "serious internal server error", id) -} - -func runCases(run *scommon.RunSpec, logfile *Logger, client stream.StreamClient, fprompt string, id, flowidx int, - fdec *data.Decorations) data.ResultReport { - var ( - err error - ab bool // First failure that isn't Soft breaks the flow. - r data.ResultReport - ) - for caseidx, _ := range run.Order.Flows[flowidx].Cases { - cased := &run.Order.Flows[flowidx].Cases[caseidx] - _ = logfile.Log(" {\"start.case\":\"" + cased.Name + "\"}") - if run.Canceled { - cased.Result = data.RunStatusCancelled - r.Skipped++ - } else if ab { - cased.Result = data.RunStatusSkipped - r.Skipped++ - } else { - var ( - e error - log []interface{} - ) - cased.Result = data.RunStatusRunning - if err = client.Send(stream.NewMsgRunCurrentCase(run.IdInt, cased.Id)); err != nil { - local.Logger.Error("Failed to send current case message.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } - - d := getCaseDispatch(cased) - startTime := time.Now() - if log, e = d.RunCase(run.Runner, cased, fdec, run.Values, fprompt); e != nil { - // Failed to run properly. 
- cased.Error = base.NewGinfraErrorA("Case run failed.", e).Error() - ab = true - cased.Result = data.RunStatusError - r.Errored++ - } else if cased.IsFailed() { - if !cased.Decorated.Soft { - ab = true - } - cased.Result = data.RunStatusFailed - r.Failed++ - } else { - cased.Result = data.RunStatusPassed - r.Passed++ - } - cased.Time = time.Since(startTime).Milliseconds() - - err = logfile.Log(log) - if err != nil { - local.Logger.Warn("Failed to marshal log data to JSON.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } - - // Update the ui - shipOrders(run, id, client, false) - } - } - - return r -} - -func runFlows(run *scommon.RunSpec, logfile *Logger, id int, client stream.StreamClient) data.ResultReport { - var ( - err error - r data.ResultReport - ab bool - ) - - for flowidx, flowd := range run.Order.Flows { - - if ab { - run.Order.Flows[flowidx].Result = data.RunStatusSkipped - continue - } - - run.Order.Flows[flowidx].Decorated = data.DecorationsFromString(run.Order.Flows[flowidx].Decorations) - - _ = logfile.Log(" {\"start.flow\":\"" + flowd.Name + "\"}") - - run.Order.Flows[flowidx].Result = data.RunStatusRunning - if flowd.Host != "" { - run.Host = flowd.Host // TODO flowd level host is for future feature work. - } - - // if no host has been set yet, the run will fail. - if run.Host == "" { - run.Order.Flows[flowidx].Error = "No host test for run. Run will fail. Generally, you should set host when starting your run." 
- run.Order.Flows[flowidx].Result = data.RunStatusError - break - } - - if err = client.Send(stream.NewMsgRunCurrentFlow(run.IdInt, flowd.Id)); err != nil { - local.Logger.Error("Failed to send current flowd message.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } - - fprompt := findPrompt(flowd.Narrative) // Top level flow prompt - if run.Order.Flows[flowidx].Decorated.Strict { - fprompt = scommon.PromptStrict + fprompt - } - - startTime := time.Now() - report := runCases(run, logfile, client, fprompt, id, flowidx, &run.Order.Flows[flowidx].Decorated) - run.Order.Flows[flowidx].Time = time.Since(startTime).Milliseconds() - - // Flow disposition - if run.Canceled { - run.Order.Flows[flowidx].Result = data.RunStatusCancelled - r.Skipped++ - } else if report.Failed > 0 { - run.Order.Flows[flowidx].Result = data.RunStatusFailed - run.Order.Flows[flowidx].Error = "Failures while running flow." - if !run.Order.Flows[flowidx].Decorated.Soft { - ab = true - run.Order.Flows[flowidx].Error = "Failures while running flow. Aborting" - } - r.Failed++ - } else if report.Errored > 0 { - run.Order.Flows[flowidx].Result = data.RunStatusError - run.Order.Flows[flowidx].Error = "Errors while running flow. Aborting" - ab = true - r.Errored++ - } else if report.Passed > 0 { - run.Order.Flows[flowidx].Result = data.RunStatusPassed - r.Passed++ - } else { - run.Order.Flows[flowidx].Result = data.RunStatusDone - r.Passed++ - } - - local.Logger.Info("Magnitude flow complete.", zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid), - zap.Int(base.LM_WORKER_ID, id), zap.String(base.LM_NAME, run.Order.Flows[flowidx].Name)) - } - - return r -} - -func worker(id int, client stream.StreamClient) { - for run := range runChannel { - // TODO this is kind of touchy For now, just ignore everything that isn't starting. 
- if run.Status != data.RunStatusRunning { - continue - } - - func() { - - // -- COMPLETION -------------------------------------------------------------------------------------------- - // Everything that has to happen when we leave the function. - defer func() { - _ = client.Send(stream.NewMsgRunCurrentFlow(run.IdInt, 0)) // Ignore the errors because if there is a problem it would have already been logged below. - _ = client.Send(stream.NewMsgRunCurrentCase(run.IdInt, 0)) - - shipOrders(run, id, client, true) - local.Logger.Info("Run orders sent.", zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid), - zap.Int(base.LM_WORKER_ID, id), zap.String(base.LM_STATUS, run.Status.String()), zap.String(base.LM_NAME, - run.Name)) - - if r := recover(); r != nil { - managePanic(run, id, client, r) - } - - moveFromRunningToHistory(run) - - if run.Runner != nil { - err := run.Runner.Kill() - if err != nil { - local.Logger.Error("Failed to kill runner. Process may be ghosted.", zap.Error(err), - zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } - } - - local.Logger.Info("Run processing complete.", zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid), - zap.Int(base.LM_WORKER_ID, id), zap.String(base.LM_STATUS, run.Status.String()), zap.String(base.LM_NAME, - run.Name)) - }() - - // -- COMPILE ------------------------------------------------------------------------------------------------- - // TODO An earlier version of this service locked the various run stores, but I'm thinking we don't really need to do that. - numerr := compileOrders(run) - if numerr > 0 { - run.Status = data.RunStatusError - run.Err = "Errors while compiling. Number of errors: " + strconv.Itoa(numerr) + "."
- local.Logger.Error("Magnitude test error(s) in compile.", zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int("worker", id), zap.Int(base.LM_NUMBER, numerr)) - sendStatus(client, run, run.Status, run.Err, id) - moveFromRunningToHistory(run) - shipOrders(run, id, client, false) - return - } - - // -- EXECUTION --------------------------------------------------------------------------------------------- - var err error - if run.Runner, err = getRunner(run.Host, filepath.Join(run.Path, local.DispositionFile), run.Prov, run.ProvModel, - run.ProvKey, run.ProvUrl); err != nil { - panic(base.NewGinfraErrorChild("Could not get a runner", err)) - } - - logfile, err := NewLogger(getLogPath(run.Path, run.Uid, run.Id)) - if err != nil { - local.Logger.Error("Failed to open log file.", zap.Error(err), zap.String(base.LM_ID, run.Id)) - } - - sendStatus(client, run, data.RunStatusRunning, "Running", id) - local.Logger.Info("Magnitude running test.", zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int("worker", id)) - - report := runFlows(run, logfile, id, client) - - // -- DISPOSITION ---------------------------------------------------------------------------------------------- - // TODO Tiny window when a stop order might catch the runningCmd still live. 
- if run.Canceled { - run.Err = "Cancelled" - sendStatus(client, run, data.RunStatusCancelled, run.Err, id) - } else if report.Errored > 0 { - run.Err = base.NewGinfraError("Errors while running.").Error() - sendStatus(client, run, data.RunStatusError, run.Err, id) - } else if report.Failed > 0 { - run.Err = base.NewGinfraError("Failures while running.").Error() - sendStatus(client, run, data.RunStatusFailed, run.Err, id) - } else if report.Passed > 0 { - run.Err = base.NewGinfraError("Passed.").Error() - sendStatus(client, run, data.RunStatusPassed, run.Err, id) - } else { - run.Err = base.NewGinfraError("Complete, no errors").Error() - sendStatus(client, run, data.RunStatusDone, run.Err, id) - } - - // We are done. Send the log and log completion. - _ = logfile.Close() - content, err := logfile.GetContents() - if err == nil { - - jcontent := data.FormatAsJSON(content) - - // TODO Since it is going into postgres for now, it will be compressed there. These should never get too big, - // it would be ok to leave it in the db over the long run. But... then we can't properly index and scrape it - // for search and analytics. Eventually it needs to move somewhere else. - b64content := data.EncodeToBase64([]byte(jcontent)) - - uidInt, err2 := strconv.Atoi(run.Uid) - idInt, err3 := strconv.Atoi(run.Id) - if err2 != nil || err3 != nil { - local.Logger.Error("BUG: During log save, failed to convert run uid and id to int. Not sending the log.", - zap.Error(err), zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } else { - - if err = client.Send(stream.NewMsgRunLog(uidInt, idInt, b64content)); err != nil { - local.Logger.Error("Failed to send run log.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id)) - } - - // We will delete the log here if everything goes right. For now we will leave the log if things go wrong, - // but eventually they need to be culled too.
- if err = os.RemoveAll(run.Path); err != nil { - local.Logger.Error("Failed to delete run directory. It is ghosted.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id), zap.String(base.LM_PATH, run.Path)) - } - - } - } else { - local.Logger.Info("Could not get run log.", zap.Error(err), zap.String(base.LM_ID, run.Id), - zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_WORKER_ID, id), zap.String(base.LM_STATUS, - run.Status.String()), zap.String(base.LM_NAME, run.Name)) - } - local.Logger.Info("Run complete.", zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid), - zap.Int(base.LM_WORKER_ID, id), zap.String(base.LM_STATUS, run.Status.String()), zap.String(base.LM_NAME, - run.Name)) - }() - } - doneChannel <- true -} - -func initWorkerPool(client stream.StreamClient) { - runChannel = make(chan *scommon.RunSpec, workerPoolSize) - doneChannel = make(chan bool) - for i := 0; i < workerPoolSize; i++ { - go worker(i, client) - } -} diff --git a/magnitude/src/index.ts b/magnitude/src/index.ts deleted file mode 100644 index f68b0c82c446dce632f47aa516844cda0dd4e86d..0000000000000000000000000000000000000000 --- a/magnitude/src/index.ts +++ /dev/null @@ -1,326 +0,0 @@ -// WWWHerd (c) 2025 Ginfra Project. All rights Reserved. 
Source licensed under Apache License Version 2.0, January 2004 -import * as readline from 'readline'; -import {AgentOptions, BrowserConnectorOptions, startBrowserAgent} from 'magnitude-core'; -import z from "zod"; -import EventEmitter from "node:events"; -import * as fs from 'fs'; - -// =================================================================================================================== -// How to use: -// ARG1: Target host url - -function sleep(ms: number): Promise<void> { - return new Promise(resolve => setTimeout(resolve, ms)); -} - -// =================================================================================================================== -// Console output capture - -interface CapturedLogEntry { - timestamp: string; - type: string; - message: string; -} - -const capturedOutput: CapturedLogEntry[] = []; - -type ConsoleMethod = typeof console.log; -const createConsoleCapture = ( - type: string -): ConsoleMethod => { - return (arg: any): void => { - let processedArg: any; - - try { - // Check if arg is already a valid JSON object/string - if (typeof arg === 'object' && arg !== null) { - processedArg = arg; - } else if (typeof arg === 'string') { - // Try to parse as JSON first - try { - processedArg = JSON.parse(arg); - } catch { - // If not JSON, clean terminal formatting and keep as string - processedArg = arg.replace(/\x1B\[[0-9;]*[a-zA-Z]/g, ''); - } - } else { - // For other types, convert to string and clean formatting - processedArg = String(arg).replace(/\x1B\[[0-9;]*[a-zA-Z]/g, ''); - } - } catch (error) { - // If anything fails, convert to string representation - processedArg = String(arg).replace(/\x1B\[[0-9;]*[a-zA-Z]/g, ''); - } - - const logEntry: CapturedLogEntry = { - timestamp: new Date().toISOString(), - type: type, - message: processedArg - }; - - capturedOutput.push(logEntry); - }; -}; - -// Override console methods to capture output -console.log = createConsoleCapture('LOG'); -console.error = createConsoleCapture('ERROR');
-console.warn = createConsoleCapture('WARN'); -console.info = createConsoleCapture('INFO'); - -// Function to clear captured output -function clearCapturedOutput(): void { - capturedOutput.length = 0; -} - -// Function to get all captured output as JSON string -function getCapturedOutput(): string { - return JSON.stringify(capturedOutput, null, 2); -} - -// ==================================================================================================================== -// AGENT - -const target = process.argv[2]; -const dispositionFile = process.argv[3]; -const provider = process.argv[4]; -const apiKey = process.argv[5]; -const model = process.argv[6]; -const url = process.argv[7]; -fs.writeFileSync(dispositionFile, `target: ${target}\nprovider: ${provider}\napiKey: ${apiKey}\nmodel: ${model}\nurl: ${url}\n----log--------------------------------------\n`); - -const agent = startBrowserAgent({ - // Starting URL for agent - url: target || "http://localhost:8888", - // Show thoughts and actions - narrate: true, - // LLM configuration - llm: { - provider: provider, - options: { - model: model, - apiKey: apiKey, - baseUrl: url - } - }, -}).catch(async error => { - console.error(JSON.stringify({ - type: "AGENT", - status: "failed", - description: "starting agent", - reasoning: error.message || error, - })) - fs.appendFileSync(dispositionFile, getCapturedOutput() + "\n"); - process.stdout.write(getCapturedOutput() + "\n"); - await sleep(2000); - process.exit(9); -}); - - -let prompt = "" -let promptset = "" -let promptglobal = "" -let data: Record<string, any> = {} - -// Function to perform agent action -async function act(order: string): Promise<void> { - // TODO Not sure if magnitude will freak if either data or prompt is empty. We shall see.
- - const resolvedAgent = await agent; - await resolvedAgent.act(order, {data: data, prompt: promptglobal + ` ` + promptset + ` ` + prompt}).catch(error => console.error( - JSON.stringify({ - type: "WWWHERD", - status: "failed", - description: order, - reasoning: error.message, - }) - )) - data = {}; - prompt = ""; -} - -const CHECK_INSTRUCTIONS=` -Given the actions of an LLM agent executing a test case, and a screenshot taken afterwards, evaluate whether the provided check "passes", i.e. holds true or not. - -Check to evaluate: -`.trim(); -interface CheckEvents { - 'checkStarted': (check: string) => void; - 'checkDone': (check: string, passed: boolean) => void; -} - -// Check doesn't scroll, so we have to just use act with the correct prompt. -async function check(description: string): Promise<void> { - await act(CHECK_INSTRUCTIONS + description) -} - -// ==================================================================================================================== -// I/O - -const rl: readline.Interface = readline.createInterface({ - input: process.stdin, - output: process.stdout, - terminal: false -}); - -// Raw input state management -let isRawMode = false; -let rawBuffer: string[] = []; - -// Process each line from stdin -rl.on('line', async (line: string): Promise<void> => { - - if (isRawMode) { - if (line.startsWith('##RAW')) { - isRawMode = false; - const raw = rawBuffer.join('\n'); - - try { - // Create an async function that has access to the agent variable - const executeRaw = new Function('agent', 'console', 'data', 'prompt', 'promptset', 'promptglobal', ` - return (async () => { - ${raw} - })(); - `); - - // Execute the raw code with access to the current context variables - await executeRaw(await agent, console, data, prompt, promptset, promptglobal); - - } catch (error) { - console.error(JSON.stringify({ - type: "WWWHERD", - status: "error", - description: "Raw code execution failed", - // reasoning: error.message || "", - //zstack: error.stack || "" - }));
- } - - const capturedJson = getCapturedOutput(); - if (capturedJson !== "[]") { - process.stdout.write(capturedJson + '\n'); - //process.stderr.write(capturedJson + '\n'); - } - clearCapturedOutput() - console.log("##RAW DONE"); - } else { - rawBuffer.push(line); - } - - } else { - - // -- ACT ---------------------------------------------------------------------------------------- - if (line.startsWith('##ACT ')) { - console.log("##ACT"); - const data = line.substring(6).trim(); - const [command, name, value] = data.split('|').map(s => s.trim()); - await act(command); - console.log("##ACT DONE"); - - // -- PROMPT ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##PROMPT ')) { - console.log("##PROMPT"); - prompt = line.substring(9).trim(); - console.log("GOOD"); - console.log("##PROMPT DONE"); - - // -- PROMPTSET ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##PROMPTSET ')) { - console.log("##PROMPTSET"); - promptset = line.substring(12).trim(); - console.log("GOOD"); - console.log("##PROMPTSET DONE"); - - // -- PROMPTCLEAR ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##PROMPTCLEAR')) { - console.log("##PROMPTCLEAR"); - promptset = "" - console.log("GOOD"); - console.log("##PROMPTCLEAR DONE"); - - // -- PROMPTGLOBAL ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##PROMPTGLOBAL ')) { - console.log("##PROMPTGLOBAL"); - promptglobal = line.substring(15).trim(); - console.log("GOOD"); - console.log("##PROMPTGLOBAL DONE"); - - // -- PROMPTGLOBALCLEAR ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##PROMPTGLOBALCLEAR')) { - console.log("##PROMPTGLOBALCLEAR"); - promptglobal = "" - console.log("GOOD"); - 
console.log("##PROMPTGLOBALCLEAR DONE"); - - // -- DATA ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##DATA ')) { - console.log("##DATA"); - try { - data = JSON.parse(line.substring(7).trim()); - console.log("GOOD"); - } catch (error) { - console.error('Error parsing JSON for ##DATA: ', error); - } - console.log("##DATA DONE"); - - // -- NAV ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##NAV ')) { - console.log("##NAV"); - const resolvedAgent = await agent; - try { - await resolvedAgent.nav(line.substring(6).trim()); - } catch (error) { - console.error('NAV ERROR:', error); - } - console.log("##NAV DONE"); - - // -- CHECK ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##CHECK ')) { - console.log("##CHECK"); - await check(line.substring(8).trim()); - console.log("##CHECK DONE"); - - // -- PING ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##PING')) { - console.log("##PONG"); - console.log("PONG") - console.log("##PONG DONE"); - - // -- RAW ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##RAW')) { - console.log("##RAW"); - isRawMode = true; - rawBuffer = []; - - // -- QUIT ---------------------------------------------------------------------------------------- - } else if (line.startsWith('##QUIT')) { - console.log("##QUIT"); - rl.close(); - process.exit(0); - } - if (line != "") { - // Output captured logs as JSON - const capturedJson = getCapturedOutput(); - if (capturedJson !== "[]") { - process.stdout.write(capturedJson + '\n'); - fs.appendFileSync(dispositionFile, capturedJson + '\n'); - //process.stderr.write(); - } - clearCapturedOutput() - } - } //end if raw -}); - -// Handle when stdin is closed 
-rl.on('close', (): void => { - fs.appendFileSync(dispositionFile, `Good exit.`); - process.exit(0); -}); - -// Optional: Handle process termination signals -process.on('SIGINT', async (): Promise<void> => { - rl.close(); - await (await agent).stop() - fs.appendFileSync(dispositionFile, `Good exit.`); - process.exit(0); -}); \ No newline at end of file diff --git a/package-lock.json b/package-lock.json index 0415afb207142798b8fc95d5933f010e49153088..5e26b6a94f010808061a371e46b2230e70eca18b 100644 --- a/package-lock.json +++ b/package-lock.json @@ -2,5 +2,1409 @@ "name": "wwwherd", "lockfileVersion": 3, "requires": true, - "packages": {} + "packages": { + "": { + "dependencies": { + "@mdi/font": "^7.4.47", + "@vitejs/plugin-vue": "^6.0.0", + "typescript": "^5.8.2", + "vite": "^7.0.4", + "vue": "^3.5.17", + "vue-tsc": "^2.2.12", + "vuetify": "^3.9.0" + } + }, + "node_modules/@babel/helper-string-parser": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.27.1.tgz", + "integrity": "sha512-qMlSxKbpRlAridDExk92nSobyDdpPijUq2DW6oDnUqd0iOGxmQjyqhMIihI9+zv4LPyZdRje2cavWPbCbWm3eA==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-validator-identifier": { + "version": "7.27.1", + "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.27.1.tgz", + "integrity": "sha512-D2hP9eA+Sqx1kBZgzxZh0y1trbuU+JoDkiEwqhQ36nodYqJwyEIhPSdMNd7lOm/4io72luTPWH20Yda0xOuUow==", + "license": "MIT", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/parser": { + "version": "7.28.4", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.28.4.tgz", + "integrity": "sha512-yZbBqeM6TkpP9du/I2pUZnJsRMGGvOuIrhjzC1AwHwW+6he4mni6Bp/m8ijn0iOuZuPI2BfkCoSRunpyjnrQKg==", + "license": "MIT", + "dependencies": { + "@babel/types": "^7.28.4" + }, + "bin": { + "parser": "bin/babel-parser.js" + }, + "engines": { + "node":
">=6.0.0" + } + }, + "node_modules/@babel/types": { + "version": "7.28.4", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.28.4.tgz", + "integrity": "sha512-bkFqkLhh3pMBUQQkpVgWDWq/lqzc2678eUyDlTBhRqhCHFguYYGM0Efga7tYk4TogG/3x0EEl66/OQ+WGbWB/Q==", + "license": "MIT", + "dependencies": { + "@babel/helper-string-parser": "^7.27.1", + "@babel/helper-validator-identifier": "^7.27.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@esbuild/aix-ppc64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.25.9.tgz", + "integrity": "sha512-OaGtL73Jck6pBKjNIe24BnFE6agGl+6KxDtTfHhy1HmhthfKouEcOhqpSL64K4/0WCtbKFLOdzD/44cJ4k9opA==", + "cpu": [ + "ppc64" + ], + "license": "MIT", + "optional": true, + "os": [ + "aix" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/android-arm": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/android-arm/-/android-arm-0.25.9.tgz", + "integrity": "sha512-5WNI1DaMtxQ7t7B6xa572XMXpHAaI/9Hnhk8lcxF4zVN4xstUgTlvuGDorBguKEnZO70qwEcLpfifMLoxiPqHQ==", + "cpu": [ + "arm" + ], + "license": "MIT", + "optional": true, + "os": [ + "android" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/android-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/android-arm64/-/android-arm64-0.25.9.tgz", + "integrity": "sha512-IDrddSmpSv51ftWslJMvl3Q2ZT98fUSL2/rlUXuVqRXHCs5EUF1/f+jbjF5+NG9UffUDMCiTyh8iec7u8RlTLg==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "android" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/android-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/android-x64/-/android-x64-0.25.9.tgz", + "integrity": "sha512-I853iMZ1hWZdNllhVZKm34f4wErd4lMyeV7BLzEExGEIZYsOzqDWDf+y082izYUE8gtJnYHdeDpN/6tUdwvfiw==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + 
"android" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/darwin-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/darwin-arm64/-/darwin-arm64-0.25.9.tgz", + "integrity": "sha512-XIpIDMAjOELi/9PB30vEbVMs3GV1v2zkkPnuyRRURbhqjyzIINwj+nbQATh4H9GxUgH1kFsEyQMxwiLFKUS6Rg==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/darwin-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/darwin-x64/-/darwin-x64-0.25.9.tgz", + "integrity": "sha512-jhHfBzjYTA1IQu8VyrjCX4ApJDnH+ez+IYVEoJHeqJm9VhG9Dh2BYaJritkYK3vMaXrf7Ogr/0MQ8/MeIefsPQ==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/freebsd-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/freebsd-arm64/-/freebsd-arm64-0.25.9.tgz", + "integrity": "sha512-z93DmbnY6fX9+KdD4Ue/H6sYs+bhFQJNCPZsi4XWJoYblUqT06MQUdBCpcSfuiN72AbqeBFu5LVQTjfXDE2A6Q==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/freebsd-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/freebsd-x64/-/freebsd-x64-0.25.9.tgz", + "integrity": "sha512-mrKX6H/vOyo5v71YfXWJxLVxgy1kyt1MQaD8wZJgJfG4gq4DpQGpgTB74e5yBeQdyMTbgxp0YtNj7NuHN0PoZg==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-arm": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-arm/-/linux-arm-0.25.9.tgz", + "integrity": "sha512-HBU2Xv78SMgaydBmdor38lg8YDnFKSARg1Q6AT0/y2ezUAKiZvc211RDFHlEZRFNRVhcMamiToo7bDx3VEOYQw==", + "cpu": [ + "arm" + ], + "license": "MIT", + "optional": true, + "os": [ 
+ "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-arm64/-/linux-arm64-0.25.9.tgz", + "integrity": "sha512-BlB7bIcLT3G26urh5Dmse7fiLmLXnRlopw4s8DalgZ8ef79Jj4aUcYbk90g8iCa2467HX8SAIidbL7gsqXHdRw==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-ia32": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-ia32/-/linux-ia32-0.25.9.tgz", + "integrity": "sha512-e7S3MOJPZGp2QW6AK6+Ly81rC7oOSerQ+P8L0ta4FhVi+/j/v2yZzx5CqqDaWjtPFfYz21Vi1S0auHrap3Ma3A==", + "cpu": [ + "ia32" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-loong64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-loong64/-/linux-loong64-0.25.9.tgz", + "integrity": "sha512-Sbe10Bnn0oUAB2AalYztvGcK+o6YFFA/9829PhOCUS9vkJElXGdphz0A3DbMdP8gmKkqPmPcMJmJOrI3VYB1JQ==", + "cpu": [ + "loong64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-mips64el": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-mips64el/-/linux-mips64el-0.25.9.tgz", + "integrity": "sha512-YcM5br0mVyZw2jcQeLIkhWtKPeVfAerES5PvOzaDxVtIyZ2NUBZKNLjC5z3/fUlDgT6w89VsxP2qzNipOaaDyA==", + "cpu": [ + "mips64el" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-ppc64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-ppc64/-/linux-ppc64-0.25.9.tgz", + "integrity": "sha512-++0HQvasdo20JytyDpFvQtNrEsAgNG2CY1CLMwGXfFTKGBGQT3bOeLSYE2l1fYdvML5KUuwn9Z8L1EWe2tzs1w==", + "cpu": [ + "ppc64" + ], + "license": "MIT", + "optional": 
true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-riscv64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-riscv64/-/linux-riscv64-0.25.9.tgz", + "integrity": "sha512-uNIBa279Y3fkjV+2cUjx36xkx7eSjb8IvnL01eXUKXez/CBHNRw5ekCGMPM0BcmqBxBcdgUWuUXmVWwm4CH9kg==", + "cpu": [ + "riscv64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-s390x": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-s390x/-/linux-s390x-0.25.9.tgz", + "integrity": "sha512-Mfiphvp3MjC/lctb+7D287Xw1DGzqJPb/J2aHHcHxflUo+8tmN/6d4k6I2yFR7BVo5/g7x2Monq4+Yew0EHRIA==", + "cpu": [ + "s390x" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/linux-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/linux-x64/-/linux-x64-0.25.9.tgz", + "integrity": "sha512-iSwByxzRe48YVkmpbgoxVzn76BXjlYFXC7NvLYq+b+kDjyyk30J0JY47DIn8z1MO3K0oSl9fZoRmZPQI4Hklzg==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/netbsd-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/netbsd-arm64/-/netbsd-arm64-0.25.9.tgz", + "integrity": "sha512-9jNJl6FqaUG+COdQMjSCGW4QiMHH88xWbvZ+kRVblZsWrkXlABuGdFJ1E9L7HK+T0Yqd4akKNa/lO0+jDxQD4Q==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "netbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/netbsd-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/netbsd-x64/-/netbsd-x64-0.25.9.tgz", + "integrity": "sha512-RLLdkflmqRG8KanPGOU7Rpg829ZHu8nFy5Pqdi9U01VYtG9Y0zOG6Vr2z4/S+/3zIyOxiK6cCeYNWOFR9QP87g==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": 
true, + "os": [ + "netbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/openbsd-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/openbsd-arm64/-/openbsd-arm64-0.25.9.tgz", + "integrity": "sha512-YaFBlPGeDasft5IIM+CQAhJAqS3St3nJzDEgsgFixcfZeyGPCd6eJBWzke5piZuZ7CtL656eOSYKk4Ls2C0FRQ==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "openbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/openbsd-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/openbsd-x64/-/openbsd-x64-0.25.9.tgz", + "integrity": "sha512-1MkgTCuvMGWuqVtAvkpkXFmtL8XhWy+j4jaSO2wxfJtilVCi0ZE37b8uOdMItIHz4I6z1bWWtEX4CJwcKYLcuA==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "openbsd" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/openharmony-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/openharmony-arm64/-/openharmony-arm64-0.25.9.tgz", + "integrity": "sha512-4Xd0xNiMVXKh6Fa7HEJQbrpP3m3DDn43jKxMjxLLRjWnRsfxjORYJlXPO4JNcXtOyfajXorRKY9NkOpTHptErg==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "openharmony" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/sunos-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/sunos-x64/-/sunos-x64-0.25.9.tgz", + "integrity": "sha512-WjH4s6hzo00nNezhp3wFIAfmGZ8U7KtrJNlFMRKxiI9mxEK1scOMAaa9i4crUtu+tBr+0IN6JCuAcSBJZfnphw==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "sunos" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/win32-arm64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/win32-arm64/-/win32-arm64-0.25.9.tgz", + "integrity": "sha512-mGFrVJHmZiRqmP8xFOc6b84/7xa5y5YvR1x8djzXpJBSv/UsNK6aqec+6JDjConTgvvQefdGhFDAs2DLAds6gQ==", + "cpu": [ + "arm64" + ], + 
"license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/win32-ia32": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/win32-ia32/-/win32-ia32-0.25.9.tgz", + "integrity": "sha512-b33gLVU2k11nVx1OhX3C8QQP6UHQK4ZtN56oFWvVXvz2VkDoe6fbG8TOgHFxEvqeqohmRnIHe5A1+HADk4OQww==", + "cpu": [ + "ia32" + ], + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@esbuild/win32-x64": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/@esbuild/win32-x64/-/win32-x64-0.25.9.tgz", + "integrity": "sha512-PPOl1mi6lpLNQxnGoyAfschAodRFYXJ+9fs6WHXz7CSWKbOqiMZsubC+BQsVKuul+3vKLuwTHsS2c2y9EoKwxQ==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "win32" + ], + "engines": { + "node": ">=18" + } + }, + "node_modules/@jridgewell/sourcemap-codec": { + "version": "1.5.5", + "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.5.tgz", + "integrity": "sha512-cYQ9310grqxueWbl+WuIUIaiUaDcj7WOq5fVhEljNVgRfOUhY9fy2zTvfoqWsnebh8Sl70VScFbICvJnLKB0Og==", + "license": "MIT" + }, + "node_modules/@mdi/font": { + "version": "7.4.47", + "resolved": "https://registry.npmjs.org/@mdi/font/-/font-7.4.47.tgz", + "integrity": "sha512-43MtGpd585SNzHZPcYowu/84Vz2a2g31TvPMTm9uTiCSWzaheQySUcSyUH/46fPnuPQWof2yd0pGBtzee/IQWw==", + "license": "Apache-2.0" + }, + "node_modules/@rolldown/pluginutils": { + "version": "1.0.0-beta.19", + "resolved": "https://registry.npmjs.org/@rolldown/pluginutils/-/pluginutils-1.0.0-beta.19.tgz", + "integrity": "sha512-3FL3mnMbPu0muGOCaKAhhFEYmqv9eTfPSJRJmANrCwtgK8VuxpsZDGK+m0LYAGoyO8+0j5uRe4PeyPDK1yA/hA==", + "license": "MIT" + }, + "node_modules/@rollup/rollup-android-arm-eabi": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm-eabi/-/rollup-android-arm-eabi-4.50.2.tgz", + "integrity": 
"sha512-uLN8NAiFVIRKX9ZQha8wy6UUs06UNSZ32xj6giK/rmMXAgKahwExvK6SsmgU5/brh4w/nSgj8e0k3c1HBQpa0A==", + "cpu": [ + "arm" + ], + "license": "MIT", + "optional": true, + "os": [ + "android" + ] + }, + "node_modules/@rollup/rollup-android-arm64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-android-arm64/-/rollup-android-arm64-4.50.2.tgz", + "integrity": "sha512-oEouqQk2/zxxj22PNcGSskya+3kV0ZKH+nQxuCCOGJ4oTXBdNTbv+f/E3c74cNLeMO1S5wVWacSws10TTSB77g==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "android" + ] + }, + "node_modules/@rollup/rollup-darwin-arm64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-arm64/-/rollup-darwin-arm64-4.50.2.tgz", + "integrity": "sha512-OZuTVTpj3CDSIxmPgGH8en/XtirV5nfljHZ3wrNwvgkT5DQLhIKAeuFSiwtbMto6oVexV0k1F1zqURPKf5rI1Q==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@rollup/rollup-darwin-x64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-darwin-x64/-/rollup-darwin-x64-4.50.2.tgz", + "integrity": "sha512-Wa/Wn8RFkIkr1vy1k1PB//VYhLnlnn5eaJkfTQKivirOvzu5uVd2It01ukeQstMursuz7S1bU+8WW+1UPXpa8A==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ] + }, + "node_modules/@rollup/rollup-freebsd-arm64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-arm64/-/rollup-freebsd-arm64-4.50.2.tgz", + "integrity": "sha512-QkzxvH3kYN9J1w7D1A+yIMdI1pPekD+pWx7G5rXgnIlQ1TVYVC6hLl7SOV9pi5q9uIDF9AuIGkuzcbF7+fAhow==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ] + }, + "node_modules/@rollup/rollup-freebsd-x64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-freebsd-x64/-/rollup-freebsd-x64-4.50.2.tgz", + "integrity": 
"sha512-dkYXB0c2XAS3a3jmyDkX4Jk0m7gWLFzq1C3qUnJJ38AyxIF5G/dyS4N9B30nvFseCfgtCEdbYFhk0ChoCGxPog==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "freebsd" + ] + }, + "node_modules/@rollup/rollup-linux-arm-gnueabihf": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-gnueabihf/-/rollup-linux-arm-gnueabihf-4.50.2.tgz", + "integrity": "sha512-9VlPY/BN3AgbukfVHAB8zNFWB/lKEuvzRo1NKev0Po8sYFKx0i+AQlCYftgEjcL43F2h9Ui1ZSdVBc4En/sP2w==", + "cpu": [ + "arm" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-arm-musleabihf": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm-musleabihf/-/rollup-linux-arm-musleabihf-4.50.2.tgz", + "integrity": "sha512-+GdKWOvsifaYNlIVf07QYan1J5F141+vGm5/Y8b9uCZnG/nxoGqgCmR24mv0koIWWuqvFYnbURRqw1lv7IBINw==", + "cpu": [ + "arm" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-arm64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-gnu/-/rollup-linux-arm64-gnu-4.50.2.tgz", + "integrity": "sha512-df0Eou14ojtUdLQdPFnymEQteENwSJAdLf5KCDrmZNsy1c3YaCNaJvYsEUHnrg+/DLBH612/R0xd3dD03uz2dg==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-arm64-musl": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-arm64-musl/-/rollup-linux-arm64-musl-4.50.2.tgz", + "integrity": "sha512-iPeouV0UIDtz8j1YFR4OJ/zf7evjauqv7jQ/EFs0ClIyL+by++hiaDAfFipjOgyz6y6xbDvJuiU4HwpVMpRFDQ==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-loong64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-loong64-gnu/-/rollup-linux-loong64-gnu-4.50.2.tgz", + 
"integrity": "sha512-OL6KaNvBopLlj5fTa5D5bau4W82f+1TyTZRr2BdnfsrnQnmdxh4okMxR2DcDkJuh4KeoQZVuvHvzuD/lyLn2Kw==", + "cpu": [ + "loong64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-ppc64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-ppc64-gnu/-/rollup-linux-ppc64-gnu-4.50.2.tgz", + "integrity": "sha512-I21VJl1w6z/K5OTRl6aS9DDsqezEZ/yKpbqlvfHbW0CEF5IL8ATBMuUx6/mp683rKTK8thjs/0BaNrZLXetLag==", + "cpu": [ + "ppc64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-riscv64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-gnu/-/rollup-linux-riscv64-gnu-4.50.2.tgz", + "integrity": "sha512-Hq6aQJT/qFFHrYMjS20nV+9SKrXL2lvFBENZoKfoTH2kKDOJqff5OSJr4x72ZaG/uUn+XmBnGhfr4lwMRrmqCQ==", + "cpu": [ + "riscv64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-riscv64-musl": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-riscv64-musl/-/rollup-linux-riscv64-musl-4.50.2.tgz", + "integrity": "sha512-82rBSEXRv5qtKyr0xZ/YMF531oj2AIpLZkeNYxmKNN6I2sVE9PGegN99tYDLK2fYHJITL1P2Lgb4ZXnv0PjQvw==", + "cpu": [ + "riscv64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-s390x-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-s390x-gnu/-/rollup-linux-s390x-gnu-4.50.2.tgz", + "integrity": "sha512-4Q3S3Hy7pC6uaRo9gtXUTJ+EKo9AKs3BXKc2jYypEcMQ49gDPFU2P1ariX9SEtBzE5egIX6fSUmbmGazwBVF9w==", + "cpu": [ + "s390x" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-x64-gnu": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-gnu/-/rollup-linux-x64-gnu-4.50.2.tgz", + "integrity": 
"sha512-9Jie/At6qk70dNIcopcL4p+1UirusEtznpNtcq/u/C5cC4HBX7qSGsYIcG6bdxj15EYWhHiu02YvmdPzylIZlA==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-linux-x64-musl": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-linux-x64-musl/-/rollup-linux-x64-musl-4.50.2.tgz", + "integrity": "sha512-HPNJwxPL3EmhzeAnsWQCM3DcoqOz3/IC6de9rWfGR8ZCuEHETi9km66bH/wG3YH0V3nyzyFEGUZeL5PKyy4xvw==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "linux" + ] + }, + "node_modules/@rollup/rollup-openharmony-arm64": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-openharmony-arm64/-/rollup-openharmony-arm64-4.50.2.tgz", + "integrity": "sha512-nMKvq6FRHSzYfKLHZ+cChowlEkR2lj/V0jYj9JnGUVPL2/mIeFGmVM2mLaFeNa5Jev7W7TovXqXIG2d39y1KYA==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "openharmony" + ] + }, + "node_modules/@rollup/rollup-win32-arm64-msvc": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-arm64-msvc/-/rollup-win32-arm64-msvc-4.50.2.tgz", + "integrity": "sha512-eFUvvnTYEKeTyHEijQKz81bLrUQOXKZqECeiWH6tb8eXXbZk+CXSG2aFrig2BQ/pjiVRj36zysjgILkqarS2YA==", + "cpu": [ + "arm64" + ], + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@rollup/rollup-win32-ia32-msvc": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-ia32-msvc/-/rollup-win32-ia32-msvc-4.50.2.tgz", + "integrity": "sha512-cBaWmXqyfRhH8zmUxK3d3sAhEWLrtMjWBRwdMMHJIXSjvjLKvv49adxiEz+FJ8AP90apSDDBx2Tyd/WylV6ikA==", + "cpu": [ + "ia32" + ], + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@rollup/rollup-win32-x64-msvc": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/@rollup/rollup-win32-x64-msvc/-/rollup-win32-x64-msvc-4.50.2.tgz", + "integrity": 
"sha512-APwKy6YUhvZaEoHyM+9xqmTpviEI+9eL7LoCH+aLcvWYHJ663qG5zx7WzWZY+a9qkg5JtzcMyJ9z0WtQBMDmgA==", + "cpu": [ + "x64" + ], + "license": "MIT", + "optional": true, + "os": [ + "win32" + ] + }, + "node_modules/@types/estree": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz", + "integrity": "sha512-dWHzHa2WqEXI/O1E9OjrocMTKJl2mSrEolh1Iomrv6U+JuNwaHXsXx9bLu5gG7BUWFIN0skIQJQ/L1rIex4X6w==", + "license": "MIT" + }, + "node_modules/@vitejs/plugin-vue": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/@vitejs/plugin-vue/-/plugin-vue-6.0.0.tgz", + "integrity": "sha512-iAliE72WsdhjzTOp2DtvKThq1VBC4REhwRcaA+zPAAph6I+OQhUXv+Xu2KS7ElxYtb7Zc/3R30Hwv1DxEo7NXQ==", + "license": "MIT", + "dependencies": { + "@rolldown/pluginutils": "1.0.0-beta.19" + }, + "engines": { + "node": "^20.19.0 || >=22.12.0" + }, + "peerDependencies": { + "vite": "^5.0.0 || ^6.0.0 || ^7.0.0", + "vue": "^3.2.25" + } + }, + "node_modules/@volar/language-core": { + "version": "2.4.15", + "resolved": "https://registry.npmjs.org/@volar/language-core/-/language-core-2.4.15.tgz", + "integrity": "sha512-3VHw+QZU0ZG9IuQmzT68IyN4hZNd9GchGPhbD9+pa8CVv7rnoOZwo7T8weIbrRmihqy3ATpdfXFnqRrfPVK6CA==", + "license": "MIT", + "dependencies": { + "@volar/source-map": "2.4.15" + } + }, + "node_modules/@volar/source-map": { + "version": "2.4.15", + "resolved": "https://registry.npmjs.org/@volar/source-map/-/source-map-2.4.15.tgz", + "integrity": "sha512-CPbMWlUN6hVZJYGcU/GSoHu4EnCHiLaXI9n8c9la6RaI9W5JHX+NqG+GSQcB0JdC2FIBLdZJwGsfKyBB71VlTg==", + "license": "MIT" + }, + "node_modules/@volar/typescript": { + "version": "2.4.15", + "resolved": "https://registry.npmjs.org/@volar/typescript/-/typescript-2.4.15.tgz", + "integrity": "sha512-2aZ8i0cqPGjXb4BhkMsPYDkkuc2ZQ6yOpqwAuNwUoncELqoy5fRgOQtLR9gB0g902iS0NAkvpIzs27geVyVdPg==", + "license": "MIT", + "dependencies": { + "@volar/language-core": "2.4.15", + "path-browserify": "^1.0.1", + "vscode-uri": "^3.0.8" + } + }, + 
"node_modules/@vue/compiler-core": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/compiler-core/-/compiler-core-3.5.17.tgz", + "integrity": "sha512-Xe+AittLbAyV0pabcN7cP7/BenRBNcteM4aSDCtRvGw0d9OL+HG1u/XHLY/kt1q4fyMeZYXyIYrsHuPSiDPosA==", + "license": "MIT", + "dependencies": { + "@babel/parser": "^7.27.5", + "@vue/shared": "3.5.17", + "entities": "^4.5.0", + "estree-walker": "^2.0.2", + "source-map-js": "^1.2.1" + } + }, + "node_modules/@vue/compiler-dom": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/compiler-dom/-/compiler-dom-3.5.17.tgz", + "integrity": "sha512-+2UgfLKoaNLhgfhV5Ihnk6wB4ljyW1/7wUIog2puUqajiC29Lp5R/IKDdkebh9jTbTogTbsgB+OY9cEWzG95JQ==", + "license": "MIT", + "dependencies": { + "@vue/compiler-core": "3.5.17", + "@vue/shared": "3.5.17" + } + }, + "node_modules/@vue/compiler-sfc": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/compiler-sfc/-/compiler-sfc-3.5.17.tgz", + "integrity": "sha512-rQQxbRJMgTqwRugtjw0cnyQv9cP4/4BxWfTdRBkqsTfLOHWykLzbOc3C4GGzAmdMDxhzU/1Ija5bTjMVrddqww==", + "license": "MIT", + "dependencies": { + "@babel/parser": "^7.27.5", + "@vue/compiler-core": "3.5.17", + "@vue/compiler-dom": "3.5.17", + "@vue/compiler-ssr": "3.5.17", + "@vue/shared": "3.5.17", + "estree-walker": "^2.0.2", + "magic-string": "^0.30.17", + "postcss": "^8.5.6", + "source-map-js": "^1.2.1" + } + }, + "node_modules/@vue/compiler-ssr": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/compiler-ssr/-/compiler-ssr-3.5.17.tgz", + "integrity": "sha512-hkDbA0Q20ZzGgpj5uZjb9rBzQtIHLS78mMilwrlpWk2Ep37DYntUz0PonQ6kr113vfOEdM+zTBuJDaceNIW0tQ==", + "license": "MIT", + "dependencies": { + "@vue/compiler-dom": "3.5.17", + "@vue/shared": "3.5.17" + } + }, + "node_modules/@vue/compiler-vue2": { + "version": "2.7.16", + "resolved": "https://registry.npmjs.org/@vue/compiler-vue2/-/compiler-vue2-2.7.16.tgz", + "integrity": 
"sha512-qYC3Psj9S/mfu9uVi5WvNZIzq+xnXMhOwbTFKKDD7b1lhpnn71jXSFdTQ+WsIEk0ONCd7VV2IMm7ONl6tbQ86A==", + "license": "MIT", + "dependencies": { + "de-indent": "^1.0.2", + "he": "^1.2.0" + } + }, + "node_modules/@vue/language-core": { + "version": "2.2.12", + "resolved": "https://registry.npmjs.org/@vue/language-core/-/language-core-2.2.12.tgz", + "integrity": "sha512-IsGljWbKGU1MZpBPN+BvPAdr55YPkj2nB/TBNGNC32Vy2qLG25DYu/NBN2vNtZqdRbTRjaoYrahLrToim2NanA==", + "license": "MIT", + "dependencies": { + "@volar/language-core": "2.4.15", + "@vue/compiler-dom": "^3.5.0", + "@vue/compiler-vue2": "^2.7.16", + "@vue/shared": "^3.5.0", + "alien-signals": "^1.0.3", + "minimatch": "^9.0.3", + "muggle-string": "^0.4.1", + "path-browserify": "^1.0.1" + }, + "peerDependencies": { + "typescript": "*" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + } + } + }, + "node_modules/@vue/reactivity": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/reactivity/-/reactivity-3.5.17.tgz", + "integrity": "sha512-l/rmw2STIscWi7SNJp708FK4Kofs97zc/5aEPQh4bOsReD/8ICuBcEmS7KGwDj5ODQLYWVN2lNibKJL1z5b+Lw==", + "license": "MIT", + "dependencies": { + "@vue/shared": "3.5.17" + } + }, + "node_modules/@vue/runtime-core": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/runtime-core/-/runtime-core-3.5.17.tgz", + "integrity": "sha512-QQLXa20dHg1R0ri4bjKeGFKEkJA7MMBxrKo2G+gJikmumRS7PTD4BOU9FKrDQWMKowz7frJJGqBffYMgQYS96Q==", + "license": "MIT", + "dependencies": { + "@vue/reactivity": "3.5.17", + "@vue/shared": "3.5.17" + } + }, + "node_modules/@vue/runtime-dom": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/runtime-dom/-/runtime-dom-3.5.17.tgz", + "integrity": "sha512-8El0M60TcwZ1QMz4/os2MdlQECgGoVHPuLnQBU3m9h3gdNRW9xRmI8iLS4t/22OQlOE6aJvNNlBiCzPHur4H9g==", + "license": "MIT", + "dependencies": { + "@vue/reactivity": "3.5.17", + "@vue/runtime-core": "3.5.17", + "@vue/shared": "3.5.17", + "csstype": "^3.1.3" + } + }, + 
"node_modules/@vue/server-renderer": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/server-renderer/-/server-renderer-3.5.17.tgz", + "integrity": "sha512-BOHhm8HalujY6lmC3DbqF6uXN/K00uWiEeF22LfEsm9Q93XeJ/plHTepGwf6tqFcF7GA5oGSSAAUock3VvzaCA==", + "license": "MIT", + "dependencies": { + "@vue/compiler-ssr": "3.5.17", + "@vue/shared": "3.5.17" + }, + "peerDependencies": { + "vue": "3.5.17" + } + }, + "node_modules/@vue/shared": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/@vue/shared/-/shared-3.5.17.tgz", + "integrity": "sha512-CabR+UN630VnsJO/jHWYBC1YVXyMq94KKp6iF5MQgZJs5I8cmjw6oVMO1oDbtBkENSHSSn/UadWlW/OAgdmKrg==", + "license": "MIT" + }, + "node_modules/alien-signals": { + "version": "1.0.13", + "resolved": "https://registry.npmjs.org/alien-signals/-/alien-signals-1.0.13.tgz", + "integrity": "sha512-OGj9yyTnJEttvzhTUWuscOvtqxq5vrhF7vL9oS0xJ2mK0ItPYP1/y+vCFebfxoEyAz0++1AIwJ5CMr+Fk3nDmg==", + "license": "MIT" + }, + "node_modules/balanced-match": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", + "integrity": "sha512-3oSeUO0TMV67hN1AmbXsK4yaqU7tjiHlbxRDZOpH0KW9+CeX4bRAaX0Anxt0tx2MrpRpWwQaPwIlISEJhYU5Pw==", + "license": "MIT" + }, + "node_modules/brace-expansion": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-2.0.2.tgz", + "integrity": "sha512-Jt0vHyM+jmUBqojB7E1NIYadt0vI0Qxjxd2TErW94wDz+E2LAm5vKMXXwg6ZZBTHPuUlDgQHKXvjGBdfcF1ZDQ==", + "license": "MIT", + "dependencies": { + "balanced-match": "^1.0.0" + } + }, + "node_modules/csstype": { + "version": "3.1.3", + "resolved": "https://registry.npmjs.org/csstype/-/csstype-3.1.3.tgz", + "integrity": "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw==", + "license": "MIT" + }, + "node_modules/de-indent": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/de-indent/-/de-indent-1.0.2.tgz", + "integrity": 
"sha512-e/1zu3xH5MQryN2zdVaF0OrdNLUbvWxzMbi+iNA6Bky7l1RoP8a2fIbRocyHclXt/arDrrR6lL3TqFD9pMQTsg==", + "license": "MIT" + }, + "node_modules/entities": { + "version": "4.5.0", + "resolved": "https://registry.npmjs.org/entities/-/entities-4.5.0.tgz", + "integrity": "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw==", + "license": "BSD-2-Clause", + "engines": { + "node": ">=0.12" + }, + "funding": { + "url": "https://github.com/fb55/entities?sponsor=1" + } + }, + "node_modules/esbuild": { + "version": "0.25.9", + "resolved": "https://registry.npmjs.org/esbuild/-/esbuild-0.25.9.tgz", + "integrity": "sha512-CRbODhYyQx3qp7ZEwzxOk4JBqmD/seJrzPa/cGjY1VtIn5E09Oi9/dB4JwctnfZ8Q8iT7rioVv5k/FNT/uf54g==", + "hasInstallScript": true, + "license": "MIT", + "bin": { + "esbuild": "bin/esbuild" + }, + "engines": { + "node": ">=18" + }, + "optionalDependencies": { + "@esbuild/aix-ppc64": "0.25.9", + "@esbuild/android-arm": "0.25.9", + "@esbuild/android-arm64": "0.25.9", + "@esbuild/android-x64": "0.25.9", + "@esbuild/darwin-arm64": "0.25.9", + "@esbuild/darwin-x64": "0.25.9", + "@esbuild/freebsd-arm64": "0.25.9", + "@esbuild/freebsd-x64": "0.25.9", + "@esbuild/linux-arm": "0.25.9", + "@esbuild/linux-arm64": "0.25.9", + "@esbuild/linux-ia32": "0.25.9", + "@esbuild/linux-loong64": "0.25.9", + "@esbuild/linux-mips64el": "0.25.9", + "@esbuild/linux-ppc64": "0.25.9", + "@esbuild/linux-riscv64": "0.25.9", + "@esbuild/linux-s390x": "0.25.9", + "@esbuild/linux-x64": "0.25.9", + "@esbuild/netbsd-arm64": "0.25.9", + "@esbuild/netbsd-x64": "0.25.9", + "@esbuild/openbsd-arm64": "0.25.9", + "@esbuild/openbsd-x64": "0.25.9", + "@esbuild/openharmony-arm64": "0.25.9", + "@esbuild/sunos-x64": "0.25.9", + "@esbuild/win32-arm64": "0.25.9", + "@esbuild/win32-ia32": "0.25.9", + "@esbuild/win32-x64": "0.25.9" + } + }, + "node_modules/estree-walker": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/estree-walker/-/estree-walker-2.0.2.tgz", + 
"integrity": "sha512-Rfkk/Mp/DL7JVje3u18FxFujQlTNR2q6QfMSMB7AvCBx91NGj/ba3kCfza0f6dVDbw7YlRf/nDrn7pQrCCyQ/w==", + "license": "MIT" + }, + "node_modules/fdir": { + "version": "6.5.0", + "resolved": "https://registry.npmjs.org/fdir/-/fdir-6.5.0.tgz", + "integrity": "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg==", + "license": "MIT", + "engines": { + "node": ">=12.0.0" + }, + "peerDependencies": { + "picomatch": "^3 || ^4" + }, + "peerDependenciesMeta": { + "picomatch": { + "optional": true + } + } + }, + "node_modules/fsevents": { + "version": "2.3.3", + "resolved": "https://registry.npmjs.org/fsevents/-/fsevents-2.3.3.tgz", + "integrity": "sha512-5xoDfX+fL7faATnagmWPpbFtwh/R77WmMMqqHGS65C3vvB0YHrgF+B1YmZ3441tMj5n63k0212XNoJwzlhffQw==", + "hasInstallScript": true, + "license": "MIT", + "optional": true, + "os": [ + "darwin" + ], + "engines": { + "node": "^8.16.0 || ^10.6.0 || >=11.0.0" + } + }, + "node_modules/he": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/he/-/he-1.2.0.tgz", + "integrity": "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw==", + "license": "MIT", + "bin": { + "he": "bin/he" + } + }, + "node_modules/magic-string": { + "version": "0.30.19", + "resolved": "https://registry.npmjs.org/magic-string/-/magic-string-0.30.19.tgz", + "integrity": "sha512-2N21sPY9Ws53PZvsEpVtNuSW+ScYbQdp4b9qUaL+9QkHUrGFKo56Lg9Emg5s9V/qrtNBmiR01sYhUOwu3H+VOw==", + "license": "MIT", + "dependencies": { + "@jridgewell/sourcemap-codec": "^1.5.5" + } + }, + "node_modules/minimatch": { + "version": "9.0.5", + "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-9.0.5.tgz", + "integrity": "sha512-G6T0ZX48xgozx7587koeX9Ys2NYy6Gmv//P89sEte9V9whIapMNF4idKxnW2QtCcLiTWlb/wfCabAtAFWhhBow==", + "license": "ISC", + "dependencies": { + "brace-expansion": "^2.0.1" + }, + "engines": { + "node": ">=16 || 14 >=14.17" + }, + "funding": { + "url": 
"https://github.com/sponsors/isaacs" + } + }, + "node_modules/muggle-string": { + "version": "0.4.1", + "resolved": "https://registry.npmjs.org/muggle-string/-/muggle-string-0.4.1.tgz", + "integrity": "sha512-VNTrAak/KhO2i8dqqnqnAHOa3cYBwXEZe9h+D5h/1ZqFSTEFHdM65lR7RoIqq3tBBYavsOXV84NoHXZ0AkPyqQ==", + "license": "MIT" + }, + "node_modules/nanoid": { + "version": "3.3.11", + "resolved": "https://registry.npmjs.org/nanoid/-/nanoid-3.3.11.tgz", + "integrity": "sha512-N8SpfPUnUp1bK+PMYW8qSWdl9U+wwNWI4QKxOYDy9JAro3WMX7p2OeVRF9v+347pnakNevPmiHhNmZ2HbFA76w==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "bin": { + "nanoid": "bin/nanoid.cjs" + }, + "engines": { + "node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1" + } + }, + "node_modules/path-browserify": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/path-browserify/-/path-browserify-1.0.1.tgz", + "integrity": "sha512-b7uo2UCUOYZcnF/3ID0lulOJi/bafxa1xPe7ZPsammBSpjSWQkjNxlt635YGS2MiR9GjvuXCtz2emr3jbsz98g==", + "license": "MIT" + }, + "node_modules/picocolors": { + "version": "1.1.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz", + "integrity": "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA==", + "license": "ISC" + }, + "node_modules/picomatch": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-4.0.3.tgz", + "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", + "license": "MIT", + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/postcss": { + "version": "8.5.6", + "resolved": "https://registry.npmjs.org/postcss/-/postcss-8.5.6.tgz", + "integrity": "sha512-3Ybi1tAuwAP9s0r1UQ2J4n5Y0G05bJkpUIO0/bI9MhwmD70S5aTWbXGBwxHrelT+XM1k6dM0pk+SwNkpTRN7Pg==", + "funding": [ + { + "type": 
"opencollective", + "url": "https://opencollective.com/postcss/" + }, + { + "type": "tidelift", + "url": "https://tidelift.com/funding/github/npm/postcss" + }, + { + "type": "github", + "url": "https://github.com/sponsors/ai" + } + ], + "license": "MIT", + "dependencies": { + "nanoid": "^3.3.11", + "picocolors": "^1.1.1", + "source-map-js": "^1.2.1" + }, + "engines": { + "node": "^10 || ^12 || >=14" + } + }, + "node_modules/rollup": { + "version": "4.50.2", + "resolved": "https://registry.npmjs.org/rollup/-/rollup-4.50.2.tgz", + "integrity": "sha512-BgLRGy7tNS9H66aIMASq1qSYbAAJV6Z6WR4QYTvj5FgF15rZ/ympT1uixHXwzbZUBDbkvqUI1KR0fH1FhMaQ9w==", + "license": "MIT", + "dependencies": { + "@types/estree": "1.0.8" + }, + "bin": { + "rollup": "dist/bin/rollup" + }, + "engines": { + "node": ">=18.0.0", + "npm": ">=8.0.0" + }, + "optionalDependencies": { + "@rollup/rollup-android-arm-eabi": "4.50.2", + "@rollup/rollup-android-arm64": "4.50.2", + "@rollup/rollup-darwin-arm64": "4.50.2", + "@rollup/rollup-darwin-x64": "4.50.2", + "@rollup/rollup-freebsd-arm64": "4.50.2", + "@rollup/rollup-freebsd-x64": "4.50.2", + "@rollup/rollup-linux-arm-gnueabihf": "4.50.2", + "@rollup/rollup-linux-arm-musleabihf": "4.50.2", + "@rollup/rollup-linux-arm64-gnu": "4.50.2", + "@rollup/rollup-linux-arm64-musl": "4.50.2", + "@rollup/rollup-linux-loong64-gnu": "4.50.2", + "@rollup/rollup-linux-ppc64-gnu": "4.50.2", + "@rollup/rollup-linux-riscv64-gnu": "4.50.2", + "@rollup/rollup-linux-riscv64-musl": "4.50.2", + "@rollup/rollup-linux-s390x-gnu": "4.50.2", + "@rollup/rollup-linux-x64-gnu": "4.50.2", + "@rollup/rollup-linux-x64-musl": "4.50.2", + "@rollup/rollup-openharmony-arm64": "4.50.2", + "@rollup/rollup-win32-arm64-msvc": "4.50.2", + "@rollup/rollup-win32-ia32-msvc": "4.50.2", + "@rollup/rollup-win32-x64-msvc": "4.50.2", + "fsevents": "~2.3.2" + } + }, + "node_modules/source-map-js": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/source-map-js/-/source-map-js-1.2.1.tgz", + 
"integrity": "sha512-UXWMKhLOwVKb728IUtQPXxfYU+usdybtUrK/8uGE8CQMvrhOpwvzDBwj0QhSL7MQc7vIsISBG8VQ8+IDQxpfQA==", + "license": "BSD-3-Clause", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/tinyglobby": { + "version": "0.2.15", + "resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz", + "integrity": "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==", + "license": "MIT", + "dependencies": { + "fdir": "^6.5.0", + "picomatch": "^4.0.3" + }, + "engines": { + "node": ">=12.0.0" + }, + "funding": { + "url": "https://github.com/sponsors/SuperchupuDev" + } + }, + "node_modules/typescript": { + "version": "5.8.2", + "resolved": "https://registry.npmjs.org/typescript/-/typescript-5.8.2.tgz", + "integrity": "sha512-aJn6wq13/afZp/jT9QZmwEjDqqvSGp1VT5GVg+f/t6/oVyrgXM6BY1h9BRh/O5p3PlUPAe+WuiEZOmb/49RqoQ==", + "license": "Apache-2.0", + "bin": { + "tsc": "bin/tsc", + "tsserver": "bin/tsserver" + }, + "engines": { + "node": ">=14.17" + } + }, + "node_modules/vite": { + "version": "7.0.4", + "resolved": "https://registry.npmjs.org/vite/-/vite-7.0.4.tgz", + "integrity": "sha512-SkaSguuS7nnmV7mfJ8l81JGBFV7Gvzp8IzgE8A8t23+AxuNX61Q5H1Tpz5efduSN7NHC8nQXD3sKQKZAu5mNEA==", + "license": "MIT", + "dependencies": { + "esbuild": "^0.25.0", + "fdir": "^6.4.6", + "picomatch": "^4.0.2", + "postcss": "^8.5.6", + "rollup": "^4.40.0", + "tinyglobby": "^0.2.14" + }, + "bin": { + "vite": "bin/vite.js" + }, + "engines": { + "node": "^20.19.0 || >=22.12.0" + }, + "funding": { + "url": "https://github.com/vitejs/vite?sponsor=1" + }, + "optionalDependencies": { + "fsevents": "~2.3.3" + }, + "peerDependencies": { + "@types/node": "^20.19.0 || >=22.12.0", + "jiti": ">=1.21.0", + "less": "^4.0.0", + "lightningcss": "^1.21.0", + "sass": "^1.70.0", + "sass-embedded": "^1.70.0", + "stylus": ">=0.54.8", + "sugarss": "^5.0.0", + "terser": "^5.16.0", + "tsx": "^4.8.1", + "yaml": "^2.4.2" + }, + "peerDependenciesMeta": { + 
"@types/node": { + "optional": true + }, + "jiti": { + "optional": true + }, + "less": { + "optional": true + }, + "lightningcss": { + "optional": true + }, + "sass": { + "optional": true + }, + "sass-embedded": { + "optional": true + }, + "stylus": { + "optional": true + }, + "sugarss": { + "optional": true + }, + "terser": { + "optional": true + }, + "tsx": { + "optional": true + }, + "yaml": { + "optional": true + } + } + }, + "node_modules/vscode-uri": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/vscode-uri/-/vscode-uri-3.1.0.tgz", + "integrity": "sha512-/BpdSx+yCQGnCvecbyXdxHDkuk55/G3xwnC0GqY4gmQ3j+A+g8kzzgB4Nk/SINjqn6+waqw3EgbVF2QKExkRxQ==", + "license": "MIT" + }, + "node_modules/vue": { + "version": "3.5.17", + "resolved": "https://registry.npmjs.org/vue/-/vue-3.5.17.tgz", + "integrity": "sha512-LbHV3xPN9BeljML+Xctq4lbz2lVHCR6DtbpTf5XIO6gugpXUN49j2QQPcMj086r9+AkJ0FfUT8xjulKKBkkr9g==", + "license": "MIT", + "dependencies": { + "@vue/compiler-dom": "3.5.17", + "@vue/compiler-sfc": "3.5.17", + "@vue/runtime-dom": "3.5.17", + "@vue/server-renderer": "3.5.17", + "@vue/shared": "3.5.17" + }, + "peerDependencies": { + "typescript": "*" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + } + } + }, + "node_modules/vue-tsc": { + "version": "2.2.12", + "resolved": "https://registry.npmjs.org/vue-tsc/-/vue-tsc-2.2.12.tgz", + "integrity": "sha512-P7OP77b2h/Pmk+lZdJ0YWs+5tJ6J2+uOQPo7tlBnY44QqQSPYvS0qVT4wqDJgwrZaLe47etJLLQRFia71GYITw==", + "license": "MIT", + "dependencies": { + "@volar/typescript": "2.4.15", + "@vue/language-core": "2.2.12" + }, + "bin": { + "vue-tsc": "bin/vue-tsc.js" + }, + "peerDependencies": { + "typescript": ">=5.0.0" + } + }, + "node_modules/vuetify": { + "version": "3.9.0", + "resolved": "https://registry.npmjs.org/vuetify/-/vuetify-3.9.0.tgz", + "integrity": "sha512-vjqyHP5gBFH4x0BAjdRAcS3FXY5OfHaKnC6Hhgln8tePZtKc3AUhF7BEJtcrD3l6XwL8gaYx/wMt+UP7X5EZJw==", + "license": "MIT", + "engines": { + "node": 
"^12.20 || >=14.13" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/johnleider" + }, + "peerDependencies": { + "typescript": ">=4.7", + "vite-plugin-vuetify": ">=2.1.0", + "vue": "^3.5.0", + "webpack-plugin-vuetify": ">=3.1.0" + }, + "peerDependenciesMeta": { + "typescript": { + "optional": true + }, + "vite-plugin-vuetify": { + "optional": true + }, + "webpack-plugin-vuetify": { + "optional": true + } + } + } + } } diff --git a/package.json b/package.json new file mode 100644 index 0000000000000000000000000000000000000000..8c0c6f5d7ca6c0bc84aeb5063dac8e2ffee00a0a --- /dev/null +++ b/package.json @@ -0,0 +1,11 @@ +{ + "dependencies": { + "@mdi/font": "^7.4.47", + "@vitejs/plugin-vue": "^6.0.0", + "typescript": "^5.8.2", + "vite": "^7.0.4", + "vue": "^3.5.17", + "vue-tsc": "^2.2.12", + "vuetify": "^3.9.0" + } +} diff --git a/magnitude/.gitignore b/runner/.gitignore similarity index 97% rename from magnitude/.gitignore rename to runner/.gitignore index 59cc251f4482720ab1b60f325fbe5678ceb534a4..22d1a8890d10b23b340d87e639e77e86b3de5502 100644 --- a/magnitude/.gitignore +++ b/runner/.gitignore @@ -47,6 +47,6 @@ tests_output/ dist/* -magnitude/ +runner/ tmp/ diff --git a/magnitude/README.md b/runner/README.md similarity index 98% rename from magnitude/README.md rename to runner/README.md index bd545b473e96c8f269af37da3325e6842c247aa1..61bbc32134d9f3093b9d731c3b7a76e718bc7981 100644 --- a/magnitude/README.md +++ b/runner/README.md @@ -48,7 +48,7 @@ See the [additional documentation](doc/Specifics) for more information. | doc/ | Additional documentation. | | local/ | Data, values and tools specific to your service. | | providers/ | Service providers that are unique to your service. | -| magnitude/ | Created and used by the service for working files. | +| runner/ | Created and used by the service for working files. | | service/ | Your service implementation. 
This is where you will implement the interfaces defined in your API specification and tests for it. | | service/private.go | Server overhead. Generally you can leave it alone. | | test/ | Data and configuration for local testing. | diff --git a/magnitude/api/README.md b/runner/api/README.md similarity index 100% rename from magnitude/api/README.md rename to runner/api/README.md diff --git a/magnitude/api/service/client/service_client.gen.go b/runner/api/service/client/service_client.gen.go similarity index 99% rename from magnitude/api/service/client/service_client.gen.go rename to runner/api/service/client/service_client.gen.go index 3a892c06d9b2fe148cbd854a1a96492f2bec6d47..045a077903cc0f2bcf9e7eb870639e2c7ab617a0 100644 --- a/magnitude/api/service/client/service_client.gen.go +++ b/runner/api/service/client/service_client.gen.go @@ -18,7 +18,7 @@ import ( "github.com/getkin/kin-openapi/openapi3" "github.com/oapi-codegen/runtime" - . "gitlab.com/ginfra/wwwherd/magnitude/api/service/models" + . 
"gitlab.com/ginfra/wwwherd/runner/api/service/models" ) // RequestEditorFn is the function signature for the RequestEditor callback function diff --git a/magnitude/api/service/models/service_models.gen.go b/runner/api/service/models/service_models.gen.go similarity index 100% rename from magnitude/api/service/models/service_models.gen.go rename to runner/api/service/models/service_models.gen.go diff --git a/magnitude/api/service/server/service_server.gen.go b/runner/api/service/server/service_server.gen.go similarity index 99% rename from magnitude/api/service/server/service_server.gen.go rename to runner/api/service/server/service_server.gen.go index 49e4a4557f8f8390ef77922b6a668ce86f026f30..a6ee64a8b4ab0848885795c0ce124495d6cf5cc6 100644 --- a/magnitude/api/service/server/service_server.gen.go +++ b/runner/api/service/server/service_server.gen.go @@ -20,7 +20,7 @@ import ( "github.com/labstack/echo/v4" "github.com/oapi-codegen/runtime" strictecho "github.com/oapi-codegen/runtime/strictmiddleware/echo" - . "gitlab.com/ginfra/wwwherd/magnitude/api/service/models" + . "gitlab.com/ginfra/wwwherd/runner/api/service/models" ) // ServerInterface represents all server handlers. 
diff --git a/magnitude/api/service/service.yaml b/runner/api/service/service.yaml
similarity index 100%
rename from magnitude/api/service/service.yaml
rename to runner/api/service/service.yaml
diff --git a/magnitude/build/Dockerfile b/runner/build/Dockerfile
similarity index 100%
rename from magnitude/build/Dockerfile
rename to runner/build/Dockerfile
diff --git a/magnitude/build/container_entrypoint.sh b/runner/build/container_entrypoint.sh
similarity index 100%
rename from magnitude/build/container_entrypoint.sh
rename to runner/build/container_entrypoint.sh
diff --git a/magnitude/build/custom_entrypoint.sh b/runner/build/custom_entrypoint.sh
similarity index 100%
rename from magnitude/build/custom_entrypoint.sh
rename to runner/build/custom_entrypoint.sh
diff --git a/magnitude/build/custom_provision.sh b/runner/build/custom_provision.sh
similarity index 100%
rename from magnitude/build/custom_provision.sh
rename to runner/build/custom_provision.sh
diff --git a/magnitude/build/custom_verify.sh b/runner/build/custom_verify.sh
similarity index 100%
rename from magnitude/build/custom_verify.sh
rename to runner/build/custom_verify.sh
diff --git a/magnitude/build/debugging.sh b/runner/build/debugging.sh
similarity index 100%
rename from magnitude/build/debugging.sh
rename to runner/build/debugging.sh
diff --git a/magnitude/build/dockertask.yaml b/runner/build/dockertask.yaml
similarity index 62%
rename from magnitude/build/dockertask.yaml
rename to runner/build/dockertask.yaml
index 600cdb9bac9c303b92abad5de694b7f661d19136..6b55858467ea151bc16c6fa6c5ab08611e6defa5 100644
--- a/magnitude/build/dockertask.yaml
+++ b/runner/build/dockertask.yaml
@@ -2,5 +2,5 @@
 version: '3'
 vars:
   GINFRA_DOCKER_REPO: ""
-  GINFRA_DOCKER_PATH: "wwwherd/magnitude"
+  GINFRA_DOCKER_PATH: "wwwherd/runner"
   GINFRA_DOCKER_TAG: ""
diff --git a/magnitude/build/files/README.md b/runner/build/files/README.md
similarity index 100%
rename from magnitude/build/files/README.md
rename to runner/build/files/README.md
diff --git a/magnitude/build/provision.sh b/runner/build/provision.sh
similarity index 100%
rename from magnitude/build/provision.sh
rename to runner/build/provision.sh
diff --git a/magnitude/build/seed_repo.cmd b/runner/build/seed_repo.cmd
similarity index 100%
rename from magnitude/build/seed_repo.cmd
rename to runner/build/seed_repo.cmd
diff --git a/magnitude/build/seed_repo.sh b/runner/build/seed_repo.sh
similarity index 100%
rename from magnitude/build/seed_repo.sh
rename to runner/build/seed_repo.sh
diff --git a/magnitude/build/service_config.json.NEW b/runner/build/service_config.json.NEW
similarity index 53%
rename from magnitude/build/service_config.json.NEW
rename to runner/build/service_config.json.NEW
index cfda3021cf4ca4212c0820e0afc11640e8a9a491..e4da2c573f0469004a68a84af7d9c2010c526251 100644
--- a/magnitude/build/service_config.json.NEW
+++ b/runner/build/service_config.json.NEW
@@ -1 +1 @@
-{"id":"local","auth":"AAAAAAAAAAAA","config_access_token":"AAAAAAAAAAAA","config_host_uri":"file://test/config.json","config_uri":"file://test/config.json","deployment":"default","name":"magnitude","port":8900,"specifics":[],"type":"local"}
+{"id":"local","auth":"AAAAAAAAAAAA","config_access_token":"AAAAAAAAAAAA","config_host_uri":"file://test/config.json","config_uri":"file://test/config.json","deployment":"default","name":"runner","port":8900,"specifics":[],"type":"local"}
diff --git a/magnitude/build/test.yaml b/runner/build/test.yaml
similarity index 90%
rename from magnitude/build/test.yaml
rename to runner/build/test.yaml
index 0e43b30e2faa1f587635f9b20b9d18f0cc1fcc43..3e9d53252b240089f89c596b56da8a0860979c99 100644
--- a/magnitude/build/test.yaml
+++ b/runner/build/test.yaml
@@ -8,7 +8,7 @@ Workflow:
     Source: Hosting.json
     Success: Configuration provider started.
   - Ginfra:
-      Cmd: manage host start magnitude
+      Cmd: manage host start runner
       Ping:
         Delay: 50
         Until: 6000
diff --git a/magnitude/doc/FLOWSCRIPT.md b/runner/doc/FLOWSCRIPT.md
similarity index 100%
rename from magnitude/doc/FLOWSCRIPT.md
rename to runner/doc/FLOWSCRIPT.md
diff --git a/magnitude/doc/README.md b/runner/doc/README.md
similarity index 100%
rename from magnitude/doc/README.md
rename to runner/doc/README.md
diff --git a/magnitude/doc/Specifics.md b/runner/doc/Specifics.md
similarity index 100%
rename from magnitude/doc/Specifics.md
rename to runner/doc/Specifics.md
diff --git a/magnitude/doc/TASK.md b/runner/doc/TASK.md
similarity index 100%
rename from magnitude/doc/TASK.md
rename to runner/doc/TASK.md
diff --git a/magnitude/doc/UserDocs.md b/runner/doc/UserDocs.md
similarity index 100%
rename from magnitude/doc/UserDocs.md
rename to runner/doc/UserDocs.md
diff --git a/magnitude/doc/modelfles/Modelfile_UI_TARS b/runner/doc/modelfles/Modelfile_UI_TARS
similarity index 100%
rename from magnitude/doc/modelfles/Modelfile_UI_TARS
rename to runner/doc/modelfles/Modelfile_UI_TARS
diff --git a/magnitude/doc/modelfles/Modelfile_qwen b/runner/doc/modelfles/Modelfile_qwen
similarity index 100%
rename from magnitude/doc/modelfles/Modelfile_qwen
rename to runner/doc/modelfles/Modelfile_qwen
diff --git a/magnitude/doc/testing.txt b/runner/doc/testing.txt
similarity index 100%
rename from magnitude/doc/testing.txt
rename to runner/doc/testing.txt
diff --git a/magnitude/go.mod b/runner/go.mod
similarity index 98%
rename from magnitude/go.mod
rename to runner/go.mod
index ede1e7570e311862f0ca238752d9a94b915e311b..62904831df45a04a754833bb5e01aa3995797d09 100644
--- a/magnitude/go.mod
+++ b/runner/go.mod
@@ -1,4 +1,4 @@
-module gitlab.com/ginfra/wwwherd/magnitude
+module gitlab.com/ginfra/wwwherd/runner
 
 go 1.24.0
 
diff --git a/magnitude/go.sum b/runner/go.sum
similarity index 100%
rename from magnitude/go.sum
rename to runner/go.sum
diff --git a/magnitude/local/README.md b/runner/local/README.md
similarity index 100%
rename from magnitude/local/README.md
rename to runner/local/README.md
diff --git a/magnitude/local/cmd.go b/runner/local/cmd.go
similarity index 100%
rename from magnitude/local/cmd.go
rename to runner/local/cmd.go
diff --git a/magnitude/local/config.go b/runner/local/config.go
similarity index 100%
rename from magnitude/local/config.go
rename to runner/local/config.go
diff --git a/magnitude/local/tools.go b/runner/local/tools.go
similarity index 96%
rename from magnitude/local/tools.go
rename to runner/local/tools.go
index 21c93d3c4356ec6c92e36e14a73b5f3e1ea487d5..40f3496721e6e1f3b51440e8402dd9023d3b195b 100644
--- a/magnitude/local/tools.go
+++ b/runner/local/tools.go
@@ -14,7 +14,7 @@ import (
 	"github.com/labstack/echo/v4"
 	"gitlab.com/ginfra/ginfra/base"
 	"gitlab.com/ginfra/ginfra/common/config"
-	"gitlab.com/ginfra/wwwherd/magnitude/api/service/models" // If you use multiple version, this will have to be changed.
+	"gitlab.com/ginfra/wwwherd/runner/api/service/models" // If you use multiple versions, this will have to be changed.
 	"go.uber.org/zap"
 	"net/http"
 )
diff --git a/magnitude/local/values.go b/runner/local/values.go
similarity index 87%
rename from magnitude/local/values.go
rename to runner/local/values.go
index c27d5aefe6526a09f929d15fac8dc6faaae0778d..056b75859b1f36017d76e2b2ddc00f358bbc39b7 100644
--- a/magnitude/local/values.go
+++ b/runner/local/values.go
@@ -10,6 +10,7 @@ package local
 
 import (
 	"go.uber.org/zap"
+	"time"
 	// # YOUR IMPORTS START HERE
 	// >A###############################################################################################################
 	// >A###############################################################################################################
@@ -19,7 +20,7 @@ import (
 
 // #####################################################################################################################
 // # STATIC
-const ServiceName = "magnitude"
+const ServiceName = "runner"
 const TestAuth = "XXXXXXXXXXXX"
 
 // #####################################################################################################################
@@ -34,9 +35,17 @@ var (
 
 const LLMKey = "LLM_API_KEY"
 const MoondreamKey = "MOONDREAM_API_KEY"
 
-const WorkingSubdir = "magnitude"
+const WorkingSubdir = "runner"
 const TestsSubdir = "tests"
 const StreamClientTargetConfig = "stream_client_target"
 const DispositionFile = "disposition.json"
+
+const MessageServerCheckTimeout = time.Millisecond * 750
+
+const TokenLength = 20
+
+var (
+	MessageServer string
+)
diff --git a/magnitude/log/README.md b/runner/log/README.md
similarity index 100%
rename from magnitude/log/README.md
rename to runner/log/README.md
diff --git a/magnitude/main.go b/runner/main.go
similarity index 92%
rename from magnitude/main.go
rename to runner/main.go
index f3b69a3560b7a49ef3266afcb2422c68093f33f2..9a6144372df528a926c4d33778ca8f614a233663 100644
--- a/magnitude/main.go
+++ b/runner/main.go
@@ -11,8 +11,8 @@ package main
 
 import (
 	"fmt"
-	"gitlab.com/ginfra/wwwherd/magnitude/local"
-	"gitlab.com/ginfra/wwwherd/magnitude/service"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	"gitlab.com/ginfra/wwwherd/runner/service"
 	"go.uber.org/zap"
 	"net/http"
 	"os"
diff --git a/magnitude/package-lock.json b/runner/package-lock.json
similarity index 99%
rename from magnitude/package-lock.json
rename to runner/package-lock.json
index 2d1a313bb6aa0c5949bcc0fee1ed15870f3068ae..0127eadec3cc7023411a39d94d5383ba960656f2 100644
--- a/magnitude/package-lock.json
+++ b/runner/package-lock.json
@@ -1,11 +1,11 @@
 {
-  "name": "service_magnitude",
+  "name": "runner",
   "version": "1.0.0",
   "lockfileVersion": 3,
   "requires": true,
   "packages": {
     "": {
-      "name": "service_magnitude",
+      "name": "runner",
       "version": "1.0.0",
       "license": "Apache-2.0",
       "dependencies": {
diff --git a/magnitude/package.json b/runner/package.json
similarity index 82%
rename from magnitude/package.json
rename to runner/package.json
index 7eb962e84673e29eb4751878684695264542f73c..08420091728a65c2f65c27d233b38646bb113e4d 100644
--- a/magnitude/package.json
+++ b/runner/package.json
@@ -15,9 +15,10 @@
     "pkgroll": "^2.10.0",
     "tsx": "^4.20.3"
   },
-  "name": "service_magnitude",
+  "name": "runner",
   "version": "1.0.0",
   "main": "src/index.ts",
+  "type": "module",
   "scripts": {
     "start": "tsx src/index.ts",
     "dev": "tsx --watch src/index.ts",
@@ -25,8 +26,8 @@
     "test": "magnitude-test",
     "test:watch": "magnitude-test --watch"
   },
-  "keywords": ["magnitude", "agent", "wwwherd"],
+  "keywords": ["runner", "magnitude", "agent", "wwwherd"],
   "author": "Ginfra Project",
   "license": "Apache-2.0",
-  "description": "WWWHerd Magnitude Agent"
+  "description": "WWWHerd Runner Agent"
 }
diff --git a/runner/providers/message/direct_connect.go b/runner/providers/message/direct_connect.go
new file mode 100644
index 0000000000000000000000000000000000000000..295cd1a0fe1b87e32f9d2a8cf621a0fe6ea6e08c
--- /dev/null
+++ b/runner/providers/message/direct_connect.go
@@ -0,0 +1,79 @@
+/*
+Package message
+Copyright (c) 2025 Erich Gatejen
+[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt
+
+Stream provider. This will be a direct call injection at this time. But as this goes to the cloud, it should be
+replaced with Memphis.dev or something like that, while still maintaining this capability for testing.
+*/
+package message
+
+import (
+	"github.com/labstack/echo/v4"
+	"gitlab.com/ginfra/wwwherd/common/stream"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	"go.uber.org/zap"
+	"io"
+	"net/http"
+	"strconv"
+)
+
+type DirectConnectProvider struct {
+	echoServer *echo.Echo
+	fn         func(message *stream.RawMessage) (*stream.RawMessage, error)
+}
+
+func NewDirectConnectProvider(echoServer *echo.Echo) MessageProvider {
+	var p DirectConnectProvider
+	p.echoServer = echoServer
+	return &p
+}
+
+func (p *DirectConnectProvider) Start(f func(message *stream.RawMessage) (*stream.RawMessage, error)) error {
+	p.fn = f
+
+	// Add the new POST route to the Echo server
+	p.echoServer.POST("/message", func(c echo.Context) error {
+		defer func() {
+			if r := recover(); r != nil {
+				local.Logger.Error("Panic recovered in Echo POST /message handler", zap.Any("panic", r))
+			}
+		}()
+
+		local.Logger.Debug("Echo message received", zap.String("method", c.Request().Method), zap.String("path", c.Request().URL.Path))
+
+		// Read the request body
+		body, err := io.ReadAll(c.Request().Body)
+		if err != nil {
+			local.Logger.Error("Failed to read request body in Echo handler", zap.Error(err))
+			return c.JSON(http.StatusBadRequest, map[string]string{"error": "Bad request"})
+		}
+
+		smis := c.Request().Header.Get("Stream-Message-Id")
+		if smis == "" {
+			local.Logger.Error("Message did not have Stream-Message-Id")
+			return c.JSON(http.StatusBadRequest, map[string]string{"error": "Bad request"})
+		}
+		midInt, err := strconv.Atoi(smis)
+		if err != nil {
+			local.Logger.Error("Stream-Message-Id is not numeric", zap.String("Stream-Message-Id", smis))
+			return c.JSON(http.StatusBadRequest, map[string]string{"error": "Bad request"})
+		}
+
+		streamMsg := stream.RawMessage{
+			MType: stream.MessageType(midInt),
+			Data:  body,
+		}
+
+		m, err := p.fn(&streamMsg)
+		if err != nil {
+			local.Logger.Error("Failed to process message", zap.Error(err))
+			return c.JSON(http.StatusBadRequest, map[string]string{"error": "Bad request"})
+		} else if m != nil {
+			return c.JSON(http.StatusOK, m)
+		}
+		return c.JSON(http.StatusOK, "{}")
+	})
+
+	return nil
+}
diff --git a/runner/providers/message/provider.go b/runner/providers/message/provider.go
new file mode 100644
index 0000000000000000000000000000000000000000..a3d31021ecb6c8d1224cc0f9ea06a75482895be6
--- /dev/null
+++ b/runner/providers/message/provider.go
@@ -0,0 +1,22 @@
+/*
+Package message
+Copyright (c) 2025 Erich Gatejen
+[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt
+
+Message provider. This will be a direct call injection at this time. But as this goes to the cloud, it should be
+replaced with Memphis.dev or something like that, while still maintaining this capability for testing.
+*/
+package message
+
+import (
+	"github.com/labstack/echo/v4"
+	"gitlab.com/ginfra/wwwherd/common/stream"
+)
+
+type MessageProvider interface {
+	Start(f func(message *stream.RawMessage) (*stream.RawMessage, error)) error
+}
+
+func GetDirectConnectProvider(echoServer *echo.Echo) (MessageProvider, error) {
+	return NewDirectConnectProvider(echoServer), nil
+}
diff --git a/magnitude/providers/service_providers.go b/runner/providers/service_providers.go
similarity index 81%
rename from magnitude/providers/service_providers.go
rename to runner/providers/service_providers.go
index f20f7e0dd866684d980fb1a0ccb1dad699f4e995..02c0c5641981c96e4c011d3b7931a32ed91352e7 100644
--- a/magnitude/providers/service_providers.go
+++ b/runner/providers/service_providers.go
@@ -1,18 +1,18 @@
 /*
 Package providers
-Copyright (c) 2023 Erich Gatejen
-[LICENSE]
+Copyright (c) 2025 Erich Gatejen
+[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt
 
 Providers setup.
-
-[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt
 */
 package providers
 
 import (
+	"github.com/labstack/echo/v4"
 	"gitlab.com/ginfra/ginfra/base"
 	"gitlab.com/ginfra/wwwherd/common/stream"
-	"gitlab.com/ginfra/wwwherd/magnitude/local"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	"gitlab.com/ginfra/wwwherd/runner/providers/message"
 	"go.uber.org/zap"
 	// # YOUR IMPORTS START HERE
 	// >A###############################################################################################################
@@ -26,13 +26,14 @@ type ProvidersLoaded struct {
 	// >B###############################################################################################################
 
 	Mc stream.StreamClient
+	Ms message.MessageProvider
 
 	// >B###############################################################################################################
 	// # ADD POINTERS (without * because they are interfaces) TO YOUR PROVIDERS END HERE
 
 }
 
-func ProvidersLoad(gcontext *local.GContext) (*ProvidersLoaded, error) {
+func ProvidersLoad(gcontext *local.GContext, echoServer *echo.Echo) (*ProvidersLoaded, error) {
 	var (
 		l   ProvidersLoaded
 		err error
@@ -41,6 +42,7 @@ func ProvidersLoad(gcontext *local.GContext) (*ProvidersLoaded, error) {
 	// # ADD INSTANTIATING AND REMEMBERING YOUR PROVIDERS START HERE
 	// >C###############################################################################################################
 
+	// Client
 	var t string
 	if t, err = gcontext.GetMySpecifics(local.StreamClientTargetConfig); err != nil {
 		return nil, base.NewGinfraErrorChildA("Could not get stream client target specifics configuration",
@@ -51,9 +53,12 @@ func ProvidersLoad(gcontext *local.GContext) (*ProvidersLoaded, error) {
 		return nil, base.NewGinfraErrorChildA("Could not get stream client", err, base.LM_TARGET, t)
 	}
 
-	local.Logger.Info("Stream client created.", zap.String(base.LM_TARGET, t))
+	// Server
+	l.Ms = message.NewDirectConnectProvider(echoServer)
+
+	local.Logger.Info("Message server created.", zap.String(base.LM_TARGET, t))
 
 	// >C###############################################################################################################
 	// # ADD INSTANTIATING AND REMEMBERING YOUR PROVIDERS END HERE
diff --git a/magnitude/service/common/prompt.go b/runner/service/common/prompt.go
similarity index 100%
rename from magnitude/service/common/prompt.go
rename to runner/service/common/prompt.go
diff --git a/magnitude/service/common/types.go b/runner/service/common/types.go
similarity index 69%
rename from magnitude/service/common/types.go
rename to runner/service/common/types.go
index b08eaf85bab739705af79d4bb75095966e892e93..147d4719bab064ab8cd618bb25abebd724ce0853 100644
--- a/magnitude/service/common/types.go
+++ b/runner/service/common/types.go
@@ -10,14 +10,29 @@ package common
 import (
 	"gitlab.com/ginfra/wwwherd/common/data"
 	"gitlab.com/ginfra/wwwherd/common/script"
-	"gitlab.com/ginfra/wwwherd/magnitude/local"
+	"gitlab.com/ginfra/wwwherd/runner/local"
 	"go.uber.org/zap"
 )
 
+type RunState int
+
+const (
+	RunStateStart RunState = iota
+	RunStateOrderFlow
+	RunStateOrderCase
+	RunStateOrderDirective
+	RunStateDone
+)
+
+const (
+	DirectiveFlagDone = -1
+)
+
 type RunSpec struct {
 	Id    string
 	IdInt int
-	Uid   string
+	Uid    string // Usually the same as OrgId
+	UidInt int
 	Name  string
 	Text  string
 	Values map[string]interface{}
@@ -30,6 +45,9 @@ type RunSpec struct {
 	ProvKey   string
 	ProvModel string
 
+	CurrentFlowIdx, CurrentCaseIdx, CurrentDirectiveIdx int
+	CurrentState RunState
+
 	// Translated and used to keep track of progress.
 	Order data.Order
diff --git a/runner/service/common/utils.go b/runner/service/common/utils.go
new file mode 100644
index 0000000000000000000000000000000000000000..44d9e8983cb46e7956eb0e414646d5847797dcf0
--- /dev/null
+++ b/runner/service/common/utils.go
@@ -0,0 +1,94 @@
+/*
+Package common
+Copyright (c) 2025 Erich Gatejen
+[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt
+
+Common utilities.
+*/
+package common
+
+import (
+	"encoding/json"
+	"fmt"
+	"strconv"
+	"strings"
+)
+
+func CleanLog(dirtext string, dirlog []interface{}) ([]interface{}, string, bool, string) {
+	filtered := make([]interface{}, 0)
+	cost := ""
+	errored := false
+	errtext := ""
+	for _, entry := range dirlog {
+		if e, ok := entry.(map[string]interface{}); ok {
+			if msg, ok := e["message"].(string); ok {
+				if !strings.HasPrefix(msg, dirtext) && !strings.Contains(msg, "[start] agent") {
+					filtered = append(filtered, entry)
+				}
+
+				// Check if message starts with "COST:" and append to cost variable
+				if strings.HasPrefix(msg, "COST:") {
+					cost += msg[5:]
+				}
+			}
+			if t, ok := e["type"].(string); ok && t == "ERROR" {
+				// Case has errored
+				errored = true
+				// Handle e["message"] - can be string, json, or non-existent
+				var messageStr string
+				if msgValue, exists := e["message"]; exists {
+					switch v := msgValue.(type) {
+					case string:
+						messageStr = v
+					case map[string]interface{}, []interface{}:
+						// If it's JSON (object or array), marshal it back to string
+						if jsonBytes, err := json.Marshal(v); err == nil {
+							messageStr = string(jsonBytes)
+						} else {
+							messageStr = fmt.Sprintf("%v", v) // fallback
+						}
+					default:
+						// For any other type, convert to string
+						messageStr = fmt.Sprintf("%v", v)
+					}
+				} else {
+					// message doesn't exist
+					messageStr = "No error message provided."
+				}
+				errtext = messageStr
+			}
+		}
+	}
+	return filtered, errtext, errored, cost
+}
+
+func TabulateCosts(data string) string {
+
+	costMap := make(map[string]float64)
+
+	// Split by whitespace to get tokens
+	tokens := strings.Fields(data)
+
+	for _, token := range tokens {
+		// Split each token by colon to separate name from value
+		parts := strings.SplitN(token, ":", 2)
+		if len(parts) == 2 {
+			name := parts[0]
+			valueStr := parts[1]
+
+			// Parse numeric value
+			if value, err := strconv.ParseFloat(valueStr, 64); err == nil {
+				costMap[name] += value
+			}
+		}
+	}
+
+	// Build accumulated results as space-separated name:summed_values
+	var costResults []string
+	for name, sum := range costMap {
+		result := fmt.Sprintf("%s:%f", name, sum)
+		costResults = append(costResults, result)
+	}
+
+	return strings.Join(costResults, " ")
+}
diff --git a/magnitude/service/private.go b/runner/service/private.go
similarity index 96%
rename from magnitude/service/private.go
rename to runner/service/private.go
index f091da46ec35f93453ffb972580ab74245131cf1..c69e60dfe0afb27c828b404a5ea4cb5364da7a78 100644
--- a/magnitude/service/private.go
+++ b/runner/service/private.go
@@ -26,8 +26,8 @@ import (
 	gecho "gitlab.com/ginfra/ginfra/common/service/echo"
 	"gitlab.com/ginfra/ginfra/common/service/gotel"
 	"gitlab.com/ginfra/ginfra/common/service/shared"
-	"gitlab.com/ginfra/wwwherd/magnitude/local"
-	"gitlab.com/ginfra/wwwherd/magnitude/providers"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	"gitlab.com/ginfra/wwwherd/runner/providers"
 	"go.uber.org/zap"
 	"net"
 	"net/http"
@@ -101,7 +101,7 @@ func ginfraHTTPErrorHandler(err error, c echo.Context) {
 	}
 }
 
-func Setup(c *local.GContext) *Servicemagnitude {
+func Setup(c *local.GContext) *Servicerunner {
 
 	var (
 		pl  *providers.ProvidersLoaded
@@ -129,24 +129,24 @@ func Setup(c *local.GContext) *Servicemagnitude {
 	local.Logger = glogger.GetLogger()
 	local.Logger.Info("Logging started.")
 
+	// Prime echo.
+	if echoServer != nil {
+		local.Logger.DPanic("Only one agent may run per app instance.")
+	}
+	echoServer = echo.New()
+
 	// Additional context setup.
 	if err = c.ContextSetup(); err != nil {
 		local.Logger.Panic(fmt.Sprintf("Failed to setup context: %v", err))
 	}
 
 	// Providers
-	if pl, err = providers.ProvidersLoad(c); err != nil {
+	if pl, err = providers.ProvidersLoad(c, echoServer); err != nil {
 		local.Logger.Panic(fmt.Sprintf("Failed to load providers: %v", err))
 	}
 
-	// Prime echo.
-	if echoServer != nil {
-		local.Logger.DPanic("Only one agent may run per app instance.")
-	}
-	echoServer = echo.New()
-
 	// Create the service context
-	service, errr := NewServicemagnitude(c, pl)
+	service, errr := NewServicerunner(c, pl)
 	if errr != nil {
 		local.Logger.Panic(errr.Error())
 	}
@@ -210,7 +210,7 @@ const StRetries = 60
 const StInterval = 200
 
-func getServiceInstance(service *Servicemagnitude) *config.Config {
+func getServiceInstance(service *Servicerunner) *config.Config {
 
 	var (
 		c   *config.Config
@@ -236,7 +236,7 @@ func getServiceInstance(service *Servicemagnitude) *config.Config {
 	return c
 }
 
-func RunService(service *Servicemagnitude) error {
+func RunService(service *Servicerunner) error {
 	var (
 		err error = nil
 		sw  []*openapi3.T
diff --git a/magnitude/service/service_control.go b/runner/service/service_control.go
similarity index 87%
rename from magnitude/service/service_control.go
rename to runner/service/service_control.go
index 8e1f3a674913b57cc16926cdfb553356e445aba2..e3ec9c859ecdd7206dbb8d924a991353fe72d1cc 100644
--- a/magnitude/service/service_control.go
+++ b/runner/service/service_control.go
@@ -12,8 +12,8 @@ import (
 	"context"
 	"github.com/labstack/echo/v4"
 	"gitlab.com/ginfra/ginfra/common/service/gotel"
-	"gitlab.com/ginfra/wwwherd/magnitude/api/service/models"
-	"gitlab.com/ginfra/wwwherd/magnitude/local"
+	"gitlab.com/ginfra/wwwherd/runner/api/service/models"
+	"gitlab.com/ginfra/wwwherd/runner/local"
 	"go.opentelemetry.io/otel/trace"
 	"go.uber.org/zap"
 	"net/http"
@@ -23,7 +23,7 @@ import (
 	// # YOUR IMPORTS END
 )
 
-func (gis *Servicemagnitude) GiControlDatasourceInit(ctx echo.Context, params models.GiControlDatasourceInitParams) error {
+func (gis *Servicerunner) GiControlDatasourceInit(ctx echo.Context, params models.GiControlDatasourceInitParams) error {
 	var span trace.Span
 
 	o := gotel.GetOtelConfig()
@@ -41,7 +41,7 @@ func (gis *Servicemagnitude) GiControlDatasourceInit(ctx echo.Context, params mo
 	// # END DATA SOURCE INIT IMPLEMENTATION
 }
 
-func (gis *Servicemagnitude) GiControlManageReset(ctx echo.Context, params models.GiControlManageResetParams) error {
+func (gis *Servicerunner) GiControlManageReset(ctx echo.Context, params models.GiControlManageResetParams) error {
 	var (
 		span trace.Span
 		err  error
@@ -65,7 +65,7 @@ func (gis *Servicemagnitude) GiControlManageReset(ctx echo.Context, params model
 	// >E###############################################################################################################
 
 	msg := "Reset OK"
-	err = MagnitudeSetup(gis.Providers.Mc)
+	err = RunnerSetup(gis.Providers.Mc, gis.Providers.Ms)
 	if err != nil {
 		msg = "Reset error. " + err.Error()
 		local.Logger.Error("Reset failed. Service is likely unavailable until system restart (including container restart).",
@@ -83,7 +83,7 @@ func (gis *Servicemagnitude) GiControlManageReset(ctx echo.Context, params model
 	return err
 }
 
-func (gis *Servicemagnitude) GiControlManageStop(ctx echo.Context, params models.GiControlManageStopParams) error {
+func (gis *Servicerunner) GiControlManageStop(ctx echo.Context, params models.GiControlManageStopParams) error {
 	var (
 		span trace.Span
 		err  error
@@ -133,7 +133,7 @@ func (gis *Servicemagnitude) GiControlManageStop(ctx echo.Context, params models
 
 // GiGetSpecifics Get specifics for this service
 // (GET /specifics)
-func (gis *Servicemagnitude) GiGetSpecifics(ctx echo.Context) error {
+func (gis *Servicerunner) GiGetSpecifics(ctx echo.Context) error {
 	var span trace.Span
 	o := gotel.GetOtelConfig()
 	if o != nil {
diff --git a/runner/service/service_dispatch.go b/runner/service/service_dispatch.go
new file mode 100644
index 0000000000000000000000000000000000000000..c2402b9a3406654e812dd66be102c19bc0fbba86
--- /dev/null
+++ b/runner/service/service_dispatch.go
@@ -0,0 +1,99 @@
+/*
+Package service
+Copyright (c) 2025 Erich Gatejen
+[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt, if unaltered. You are free to alter and apply any license
+you wish to this file.
+
+Work management.
+*/
+package service
+
+import (
+	"encoding/json"
+	"gitlab.com/ginfra/ginfra/base"
+	"gitlab.com/ginfra/wwwherd/common/data"
+	"gitlab.com/ginfra/wwwherd/common/script"
+	"gitlab.com/ginfra/wwwherd/common/stream"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	scommon "gitlab.com/ginfra/wwwherd/runner/service/common"
+	"go.uber.org/zap"
+	"strconv"
+)
+
+// #####################################################################################################################
+// # FUNCTIONS
+
+func sendStatus(run *scommon.RunSpec, status data.RunStatus, d string) {
+	run.Status = status
+	if err := sclient.Send(stream.NewMsgRunStatus(run.IdInt, status, d)); err != nil {
+		local.Logger.Error(data.SMsgGeneralFailedSendStatus, zap.Error(err), zap.String(base.LM_ID, run.Id),
+			zap.String(base.LM_UID, run.Uid))
+	}
+}
+
+func shipOrders(run *scommon.RunSpec, final bool) {
+	orderText, err := json.Marshal(run.Order)
+	if err != nil {
+		local.Logger.Error("Failed to marshal orders.", zap.Error(err), zap.String(base.LM_ID, run.Id),
+			zap.String(base.LM_UID, run.Uid))
+	} else if err = sclient.Send(stream.NewMsgRunOrders(run.IdInt, string(orderText), final)); err != nil {
+		local.Logger.Error(data.SMsgGeneralFailedSendOrders, zap.Error(err), zap.String(base.LM_ID, run.Id),
+			zap.String(base.LM_UID, run.Uid))
+	}
+}
+
+func getRunner(token, server, target, path, provider, model, key, url string) (*script.Runner, error) {
+	return script.NewRunner(token, server, target, path, provider, model, key, url)
+}
+
+// #####################################################################################################################
+// # DISPATCH
+
+var (
+	sclient stream.StreamClient
+	tokens  = make(map[string]string)
+)
+
+func SetupDispatch(c stream.StreamClient) {
+	sclient = c
+}
+
+func Dispatch(run *scommon.RunSpec) error {
+	var (
+		err error
+	)
+
+	// -- COMPILE --------------------------------------------------------------------------------------------------
+	// TODO an earlier version of this service locked the various run stores, but I'm thinking we don't really need to do that.
+	numerr := compileOrders(run)
+	if numerr > 0 {
+		run.Status = data.RunStatusError
+		run.Err = "Errors while compiling. Number of errors: " + strconv.Itoa(numerr) + "."
+		local.Logger.Error("Compile errors", zap.String(base.LM_ID, run.Id),
+			zap.String(base.LM_UID, run.Uid), zap.Int(base.LM_NUMBER, numerr))
+		sendStatus(run, run.Status, run.Err)
+		moveFromRunningToHistory(run)
+		shipOrders(run, false)
+		return nil
+	}
+
+	// -- EXECUTION ------------------------------------------------------------------------------------------------
+	run.Status = data.RunStatusRunning
+	sendStatus(run, data.RunStatusRunning, data.SMsgRunnerRunning)
+	local.Logger.Info(data.SMsgRunnerRunning, zap.String(base.LM_ID, run.Id), zap.String(base.LM_UID, run.Uid))
+
+	// Yeah, I know, but this is a reminder.
+	run.CurrentFlowIdx = 0
+	run.CurrentCaseIdx = 0
+	run.CurrentState = scommon.RunStateStart
+
+	token := base.GeneratePasswordClean(local.TokenLength)
+	tokens[token] = getKey(run.Uid, run.Id)
+
+	if run.Runner, err = getRunner(token, local.MessageServer, run.Host, run.Path, run.Prov, run.ProvModel, run.ProvKey,
+		run.ProvUrl); err != nil {
+		sendStatus(run, data.RunStatusError, "AI backend error.")
+	}
+
+	return err
+}
diff --git a/magnitude/service/service_feature.go b/runner/service/service_feature.go
similarity index 74%
rename from magnitude/service/service_feature.go
rename to runner/service/service_feature.go
index 27239df70072de7ee81675c01395862ed38174ca..1b92e7f3110d20ee912e22521984f8f451dcea2c 100644
--- a/magnitude/service/service_feature.go
+++ b/runner/service/service_feature.go
@@ -43,8 +43,8 @@ import (
 	"gitlab.com/ginfra/ginfra/base"
 	"gitlab.com/ginfra/wwwherd/common/data"
 	"gitlab.com/ginfra/wwwherd/common/stream"
-	"gitlab.com/ginfra/wwwherd/magnitude/local"
-	"gitlab.com/ginfra/wwwherd/magnitude/service/common"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	"gitlab.com/ginfra/wwwherd/runner/service/common"
 	"go.uber.org/zap"
 	"os"
 	"runtime"
@@ -63,7 +63,7 @@ var (
 // #####################################################################################################################
 // # FEATURES
 
-func dataStatusMagnitude(rs *common.RunSpec) (string, string, string, string, data.RunStatus, error) {
+func dataStatusRunner(rs *common.RunSpec) (string, string, string, string, data.RunStatus, error) {
 	if rs == nil {
 		return "", "", "", "", data.RunStatusUnknown, base.NewGinfraError("RunSpec is nil.")
 	}
@@ -73,7 +73,7 @@ func dataStatusRunner(rs *common.RunSpec) (string, string, string, string, data.
 	return rs.Id, rs.Uid, rs.Name, "OK", rs.Status, nil
 }
 
-func MagnitudeRun(id int, values map[string]interface{}, uid, name, target, provId, provUrl, provKey, provModel, d string) error {
+func RunnerRun(id int, values map[string]interface{}, uid, name, target, provId, provUrl, provKey, provModel, d string) error {
 	// TODO the locks might not be needed any more.
 	if !MagLock.Lock(time.Minute) {
 		return base.NewGinfraError("Run failed due to timeout.")
@@ -85,10 +85,16 @@ func RunnerRun(id int, values map[string]interface{}, uid, name, target, provId,
 		return base.NewGinfraErrorChild("Failed to decrypt key.", err)
 	}
 
+	uidi, err := strconv.Atoi(uid)
+	if err != nil {
+		return base.NewGinfraErrorChild("Bad UID. Must be numeric.", err)
+	}
+
 	spec := common.RunSpec{
 		Id:    strconv.Itoa(id),
 		IdInt: id,
 		Uid:   uid,
+		UidInt: uidi,
 		Name:  name,
 		Text:  d,
 		Values: values,
@@ -100,16 +106,16 @@ func RunnerRun(id int, values map[string]interface{}, uid, name, target, provId,
 		ProvKey:   key,
 		ProvModel: provModel,
 	}
-	runQueuePut(uid, &spec)
+	runQueuePut(&spec)
 
-	local.Logger.Info("Magnitude test queued.", zap.String(base.LM_ID, spec.Id), zap.String(base.LM_UID, spec.Uid))
+	local.Logger.Info("Runner workflow queued.", zap.String(base.LM_ID, spec.Id), zap.String(base.LM_UID, spec.Uid))
 
 	return nil
 }
 
-// MagnitudeStop Stops the test for the given id.
+// RunnerStop Stops the test for the given id.
 // If the id doesn't exist, it will just return an error.
-func MagnitudeStop(gis *Servicemagnitude, uid, id string) error {
+func RunnerStop(gis *Servicerunner, uid, id string) error {
 	if !MagLock.Lock(time.Minute) {
 		return base.NewGinfraError("Status failed due to timeout.")
 	}
@@ -117,13 +123,13 @@ func RunnerStop(gis *Servicerunner, uid, id string) error {
 
 	uidInt, err := strconv.Atoi(uid)
 	if err != nil {
-		local.Logger.Error("BUG: Invalid uid format passed to MagnitudeStop", zap.String("uid", uid), zap.Error(err))
+		local.Logger.Error("BUG: Invalid uid format passed to RunnerStop", zap.String("uid", uid), zap.Error(err))
 		return base.NewGinfraError("Invalid id format")
 	}
 
 	idInt, err := strconv.Atoi(id)
 	if err != nil {
-		local.Logger.Error("Invalid id format passed to MagnitudeStop", zap.String("id", id), zap.Error(err))
+		local.Logger.Error("Invalid id format passed to RunnerStop", zap.String("id", id), zap.Error(err))
 		return base.NewGinfraError("Invalid id format")
 	}
 
@@ -135,18 +141,9 @@ func RunnerStop(gis *Servicerunner, uid, id string) error {
 	switch spec.Status {
 	case data.RunStatusWaiting:
 		// Remove from queue
-		if queue, exists := runQueue.Get(spec.Uid); exists {
-			for i, queuedSpec := range queue {
-				if queuedSpec.Id == spec.Id {
-					runQueue.Set(spec.Uid, append(queue[:i], queue[i+1:]...))
-					goto waiting2Stop
-				}
-			}
-		}
-		panic("BUG: runQueue should always exist for this user and ID since findRunSpec had no error.")
-	waiting2Stop:
+		runQueueRemove(spec.Uid, spec.Id)
 		spec.Status = data.RunStatusCancelled
-		historyPut(uid, id, spec)
+		historyPut(spec)
 
 	case data.RunStatusRunning:
 		// Call kill running process even if it is just in the channel.
@@ -182,9 +179,9 @@ func RunnerStop(gis *Servicerunner, uid, id string) error {
 	return err
 }
 
-// MagnitudeStatus returns the Id, Uid, Name, Status (text narrative or error) and Status token for the run with the given id.
+// RunnerStatus returns the Id, Uid, Name, Status (text narrative or error) and Status token for the run with the given id.
 // If the id doesn't exist, it will just return an error.
-func MagnitudeStatus(uid, id string) (string, string, string, string, data.RunStatus, error) {
+func RunnerStatus(uid, id string) (string, string, string, string, data.RunStatus, error) {
 	if !MagLock.Lock(time.Minute) {
 		return "", "", "", "", data.RunStatusUnknown, base.NewGinfraError("Status failed due to timeout.")
 	}
@@ -193,11 +190,11 @@ func RunnerStatus(uid, id string) (string, string, string, string, data.RunStatu
 	if err != nil {
 		return "", "", "", "", data.RunStatusUnknown, err
 	}
-	return dataStatusMagnitude(spec)
+	return dataStatusRunner(spec)
 }
 
-// MagnitudeLog get the log
-func MagnitudeLog(uid string, id string) (string, error) {
+// RunnerLog get the log
+func RunnerLog(uid string, id string) (string, error) {
 	if !MagLock.Lock(time.Minute) {
 		return "", base.NewGinfraError("Status failed due to timeout.")
 	}
@@ -225,8 +222,8 @@ func RunnerLog(uid string, id string) (string, error) {
 	return string(fd), nil
 }
 
-// MagnitudeRemove remove completed tests.
-func MagnitudeRemove(uid string, ids *[]string) error {
+// RunnerRemove remove completed tests.
+func RunnerRemove(uid string, ids *[]string) error {
 	if ids == nil || len(*ids) < 1 {
 		return base.NewGinfraError("No ids provided.")
 	}
@@ -260,8 +257,8 @@ func RunnerRemove(uid string, ids *[]string) error {
 }
 
 func runloop(client stream.StreamClient) {
-	initWorkerPool(client)
-	for {
+
+	for loopRunning == true {
 
 		func() {
 			defer func() {
@@ -278,15 +275,8 @@ func runloop(client stream.StreamClient) {
 				}
 			}()
 
-			if runQueue.Count() == 0 {
+			if runQueueLen() == 0 {
 				time.Sleep(time.Second)
-				if loopRunning == false {
-					close(runChannel)
-					for i := 0; i < workerPoolSize; i++ {
-						<-doneChannel
-					}
-					return
-				}
 				return
 			}
 
@@ -294,27 +284,33 @@ func runloop(client stream.StreamClient) {
 				return
 			}
 
-			for uid, rs := range runQueue.Items() {
-				if runningNumberRunning(uid) < workerPoolSize {
-					if len(rs) > 0 {
-						err := os.MkdirAll(rs[0].Path, 0777)
+			if runningLen() < runningPoolSize {
+				rs := runQueueDequeue()
+				if rs != nil {
+					err := os.MkdirAll(rs.Path, 0777)
+					if err == nil {
+						rs.Status = data.RunStatusRunning
+						runningPut(rs)
+						err = Dispatch(rs)
 						if err == nil {
-							rs[0].Status = data.RunStatusRunning
-							runningPut(uid, rs[0].Id, rs[0])
-							runChannel <- rs[0]
-							local.Logger.Info("Magnitude test dispatched.", zap.String(base.LM_ID, rs[0].Id),
-								zap.String(base.LM_UID, uid))
+							local.Logger.Info("Runner workflow dispatched.", zap.String(base.LM_ID, rs.Id),
+								zap.String(base.LM_UID, rs.Uid))
 						} else {
-							rs[0].Status = data.RunStatusError
-							rs[0].Err = "Failed to create run directory."
-							runningPut(uid, rs[0].Id, rs[0])
-							local.Logger.Error("Failed to create run directory.", zap.String(base.LM_ID, rs[0].Id),
-								zap.String(base.LM_UID, uid), zap.Error(err))
+							local.Logger.Info("Runner workflow dispatch failed.", zap.String(base.LM_ID, rs.Id),
+								zap.String(base.LM_UID, rs.Uid), zap.Error(err))
 						}
-						runQueue.Set(uid, rs[1:])
+
+					} else {
+						rs.Status = data.RunStatusError
+						rs.Err = "Failed to create run directory."
+						historyPut(rs)
+						// TODO notify dispatch?
+						local.Logger.Error("Failed to create run directory.", zap.String(base.LM_ID, rs.Id),
+							zap.String(base.LM_UID, rs.Uid), zap.Error(err))
 					}
 				}
 			}
+
 			MagLock.Unlock()
 		}()
 	}
diff --git a/magnitude/service/service_impl.go b/runner/service/service_impl.go
similarity index 81%
rename from magnitude/service/service_impl.go
rename to runner/service/service_impl.go
index 4e8dab998aa0e113c83f0686c368c37a28a11c46..88b45a4e3f03b96d2122b7d7e943c434e2685c53 100644
--- a/magnitude/service/service_impl.go
+++ b/runner/service/service_impl.go
@@ -14,9 +14,9 @@ import (
 	"github.com/labstack/echo/v4"
 	"gitlab.com/ginfra/ginfra/common/config"
 	"gitlab.com/ginfra/ginfra/common/service/shared"
-	sv "gitlab.com/ginfra/wwwherd/magnitude/api/service/server"
-	"gitlab.com/ginfra/wwwherd/magnitude/local"
-	"gitlab.com/ginfra/wwwherd/magnitude/providers"
+	sv "gitlab.com/ginfra/wwwherd/runner/api/service/server"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	"gitlab.com/ginfra/wwwherd/runner/providers"
 	"go.uber.org/zap"
 	"os"
 	"path/filepath"
@@ -27,8 +27,8 @@ import (
 
 	"bytes"
 	"gitlab.com/ginfra/ginfra/base"
-	"gitlab.com/ginfra/wwwherd/magnitude/api/service/models"
-	scommon "gitlab.com/ginfra/wwwherd/magnitude/service/common"
+	"gitlab.com/ginfra/wwwherd/runner/api/service/models"
+	scommon "gitlab.com/ginfra/wwwherd/runner/service/common"
 	"go.opentelemetry.io/otel/attribute"
 	"io"
 	"net/http"
@@ -38,7 +38,7 @@ import (
 	// # YOUR IMPORTS END
 )
 
-type Servicemagnitude struct {
+type Servicerunner struct {
 	Providers      *providers.ProvidersLoaded
 	ServiceContext *local.GContext
 	ServiceConfig  *config.Config
@@ -50,8 +50,8 @@ type Servicerunner struct {
 	// # ADD ADDITIONAL SERVICE DATA END HERE
 }
 
-func NewServicemagnitude(gcontext *local.GContext, providers *providers.ProvidersLoaded) (*Servicemagnitude, error) {
-	return &Servicemagnitude{
+func NewServicerunner(gcontext *local.GContext, providers *providers.ProvidersLoaded) (*Servicerunner, error) {
+	return &Servicerunner{
 		Providers:      providers,
 		ServiceContext: gcontext,
 	}, nil
@@ -69,7 +69,7 @@ func doBundleFile(path string, cfg *config.Config, item string) {
 	}
 }
 
-func ServiceSetup(service *Servicemagnitude) error {
+func ServiceSetup(service *Servicerunner) error {
 
 	var err error
 
@@ -80,7 +80,9 @@ func ServiceSetup(service *Servicerunner) error {
 	// # ADD ADDITIONAL SETUP CODE START HERE. All configuration is complete and the providers are ready.
 	// >D###############################################################################################################
 
-	serr := MagnitudeSetup(service.Providers.Mc)
+	local.MessageServer = "http://localhost:" + service.ServiceConfig.GetValue("port").GetValueStringNoerr() + "/message"
+
+	serr := RunnerSetup(service.Providers.Mc, service.Providers.Ms)
 	if serr != nil {
 		local.Logger.Error("Setup failed. Service unavailable until corrected and reset", zap.Error(serr))
 	}
@@ -93,14 +95,14 @@ func ServiceSetup(service *Servicerunner) error {
 
 var specifics string
 
-func getServiceSpecifics(gis *Servicemagnitude) string {
+func getServiceSpecifics(gis *Servicerunner) string {
 	if specifics == "" {
 		specifics = shared.LoadSpecifics(gis.ServiceContext.Scfg.ServiceHome)
 	}
 	return specifics
 }
 
-func RegisterHandlers(echoServer *echo.Echo, service *Servicemagnitude) ([]*openapi3.T, error) {
+func RegisterHandlers(echoServer *echo.Echo, service *Servicerunner) ([]*openapi3.T, error) {
 
 	// Always register the main service.
 	sw, err := sv.GetSwagger()
@@ -148,7 +150,7 @@ func RegisterHandlers(echoServer *echo.Echo, service *Servicerunner) ([]*openapi
 
 // # ADDITIONAL CODE GOES BELOW HERE
 // # Stubs can go here or in other files.
 
-func (gis *Servicemagnitude) triageEntry(ctx echo.Context, auth string) (bool, error) {
+func (gis *Servicerunner) triageEntry(ctx echo.Context, auth string) (bool, error) {
 	if !setupComplete {
 		local.Logger.Error("Service not setup.
Correct and reset.") return true, local.PostErrorResponse(ctx, http.StatusInternalServerError, base.NewGinfraError("Service not setup. Correct and reset.")) @@ -163,7 +165,7 @@ func (gis *Servicemagnitude) triageEntry(ctx echo.Context, auth string) (bool, e // GiTestRun Run a test. // (POST /gitest/run) -func (gis *Servicemagnitude) GiTestRun(ctx echo.Context, params models.GiTestRunParams) (err error) { +func (gis *Servicerunner) GiTestRun(ctx echo.Context, params models.GiTestRunParams) (err error) { span := SpanStart("GiTestRun", []base.NV{{"target", params.Target}, {"uid", params.Uid}, {"name", params.Name}}) @@ -197,7 +199,7 @@ func (gis *Servicemagnitude) GiTestRun(ctx echo.Context, params models.GiTestRun } } - err = MagnitudeRun(params.Id, values, params.Uid, params.Name, params.Target, params.Provider, base.Ptr2StringOrEmpty(params.ProviderUrl), + err = RunnerRun(params.Id, values, params.Uid, params.Name, params.Target, params.Provider, base.Ptr2StringOrEmpty(params.ProviderUrl), base.Ptr2StringOrEmpty(params.ProviderToken), base.Ptr2StringOrEmpty(params.ProviderModel), pstring) if err != nil { return TriageError(ctx, err) @@ -211,7 +213,7 @@ func (gis *Servicemagnitude) GiTestRun(ctx echo.Context, params models.GiTestRun // GiTestStop Stop a test. 
// (GET /gitest/stop) -func (gis *Servicemagnitude) GiTestStop(ctx echo.Context, params models.GiTestStopParams) (err error) { +func (gis *Servicerunner) GiTestStop(ctx echo.Context, params models.GiTestStopParams) (err error) { span := SpanStart("GiTestStop", []base.NV{{"id", params.Id}, {"uid", params.Uid}}) defer func() { SpanEnd(span, err) @@ -221,7 +223,7 @@ func (gis *Servicemagnitude) GiTestStop(ctx echo.Context, params models.GiTestSt return err } - err = MagnitudeStop(gis, params.Uid, params.Id) + err = RunnerStop(gis, params.Uid, params.Id) if err != nil { return TriageError(ctx, err) } @@ -231,7 +233,7 @@ func (gis *Servicemagnitude) GiTestStop(ctx echo.Context, params models.GiTestSt // GiTestStatusTest Get status for a test. // (GET /test/gitest/status/test) -func (gis *Servicemagnitude) GiTestStatusTest(ctx echo.Context, params models.GiTestStatusTestParams) (err error) { +func (gis *Servicerunner) GiTestStatusTest(ctx echo.Context, params models.GiTestStatusTestParams) (err error) { span := SpanStart("GiTestStatusTest", []base.NV{{"id", params.Id}, {"uid", params.Uid}}) defer func() { @@ -242,7 +244,7 @@ func (gis *Servicemagnitude) GiTestStatusTest(ctx echo.Context, params models.Gi return err } - id, _, name, status, state, err := MagnitudeStatus(params.Uid, params.Id) + id, _, name, status, state, err := RunnerStatus(params.Uid, params.Id) if err != nil { return TriageError(ctx, err) } @@ -264,7 +266,7 @@ func (gis *Servicemagnitude) GiTestStatusTest(ctx echo.Context, params models.Gi func gatherStatusSub(runs []*scommon.RunSpec) *[]models.TestStatusItem { rd := make([]models.TestStatusItem, 0) for _, item := range runs { - id, _, name, text, state, _ := dataStatusMagnitude(item) + id, _, name, text, state, _ := dataStatusRunner(item) ri := models.TestStatusItem{ Id: &id, Name: &name, @@ -281,7 +283,7 @@ func gatherStatusSub(runs []*scommon.RunSpec) *[]models.TestStatusItem { return &rd } -func (gis *Servicemagnitude) gatherGiTestStatusAll(uid 
string) (models.TestStatusInfo, error) { +func (gis *Servicerunner) gatherGiTestStatusAll(uid string) (models.TestStatusInfo, error) { var r models.TestStatusInfo if !MagLock.Lock(time.Minute) { return r, base.NewGinfraError("Status failed due to timeout.") @@ -297,7 +299,7 @@ func (gis *Servicemagnitude) gatherGiTestStatusAll(uid string) (models.TestStatu // GiTestStatusAll Get status for all tests. // (GET /test/gitest/status/all) -func (gis *Servicemagnitude) GiTestStatusAll(ctx echo.Context, params models.GiTestStatusAllParams) (err error) { +func (gis *Servicerunner) GiTestStatusAll(ctx echo.Context, params models.GiTestStatusAllParams) (err error) { span := SpanStart("GiTestStatusAll", []base.NV{{"uid", params.Uid}}) defer func() { @@ -318,7 +320,7 @@ func (gis *Servicemagnitude) GiTestStatusAll(ctx echo.Context, params models.GiT // GiTestLog Get log for a test. // (GET /test/gitest/log) -func (gis *Servicemagnitude) GiTestLog(ctx echo.Context, params models.GiTestLogParams) (err error) { +func (gis *Servicerunner) GiTestLog(ctx echo.Context, params models.GiTestLogParams) (err error) { span := SpanStart("GiTestStatusAll", []base.NV{{"id", params.Id}, {"uid", params.Uid}}) defer func() { @@ -330,7 +332,7 @@ func (gis *Servicemagnitude) GiTestLog(ctx echo.Context, params models.GiTestLog } var log string - log, err = MagnitudeLog(params.Uid, params.Id) + log, err = RunnerLog(params.Uid, params.Id) if err != nil { return TriageError(ctx, err) } @@ -340,7 +342,7 @@ func (gis *Servicemagnitude) GiTestLog(ctx echo.Context, params models.GiTestLog // GiTestRemove Remove a list of completed tests. 
 // (POST /gitest/remove)
-func (gis *Servicemagnitude) GiTestRemove(ctx echo.Context, params models.GiTestRemoveParams) (err error) {
+func (gis *Servicerunner) GiTestRemove(ctx echo.Context, params models.GiTestRemoveParams) (err error) {
 	span := SpanStart("GiTestRemove", []base.NV{{"uid", params.Uid}})

 	defer func() {
@@ -358,7 +360,7 @@ func (gis *Servicemagnitude) GiTestRemove(ctx echo.Context, params models.GiTest
 		return local.PostErrorResponse(ctx, http.StatusInternalServerError, err)
 	}

-	err = MagnitudeRemove(params.Uid, req.Remove)
+	err = RunnerRemove(params.Uid, req.Remove)
 	if err != nil {
 		return TriageError(ctx, err)
 	}
diff --git a/runner/service/service_message.go b/runner/service/service_message.go
new file mode 100644
index 0000000000000000000000000000000000000000..7ff1a44248395432c88a8cd503a3661e5b21f134
--- /dev/null
+++ b/runner/service/service_message.go
@@ -0,0 +1,611 @@
+/*
+Package service
+Copyright (c) 2025 Erich Gatejen
+[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt
+*/
+package service
+
+import (
+	"encoding/json"
+	"fmt"
+	"gitlab.com/ginfra/ginfra/base"
+	"gitlab.com/ginfra/ginfra/common"
+	"gitlab.com/ginfra/wwwherd/common/data"
+	"gitlab.com/ginfra/wwwherd/common/script"
+	fscript "gitlab.com/ginfra/wwwherd/common/script/flow"
+	rscript "gitlab.com/ginfra/wwwherd/common/script/raw"
+	"gitlab.com/ginfra/wwwherd/common/stream"
+	"gitlab.com/ginfra/wwwherd/runner/local"
+	scommon "gitlab.com/ginfra/wwwherd/runner/service/common"
+	"go.uber.org/zap"
+	"runtime"
+	"strconv"
+	"strings"
+	"time"
+)
+
+// #####################################################################################################################
+// # DATA
+
+var (
+	latestInputCost  float64 = 0.0
+	latestOutputCost float64 = 0.0
+	latestInputTokens           = 0
+	latestOutputTokens          = 0
+	latestCacheWriteInputTokens = 0
+	latestCacheReadInputTokens  = 0
+
+	startCaseTime time.Time
+	startFlowTime time.Time
+)
+
+func clearCosts() {
+	latestInputCost = 0.0
latestOutputCost = 0.0 + latestInputTokens = 0 + latestOutputTokens = 0 + latestCacheWriteInputTokens = 0 + latestCacheReadInputTokens = 0 +} + +func parseCosts(costString string) error { + // Split the string by spaces to get individual cost components + parts := strings.Split(costString, " ") + + for _, part := range parts { + // Split each part by colon to get key-value pairs + keyValue := strings.Split(part, ":") + if len(keyValue) != 2 { + continue + } + + key := keyValue[0] + value := keyValue[1] + + switch key { + case "inputCost": + if val, err := strconv.ParseFloat(value, 64); err == nil { + latestInputCost += val + } else { + return fmt.Errorf("failed to parse inputCost: %v", err) + } + case "outputCost": + if val, err := strconv.ParseFloat(value, 64); err == nil { + latestOutputCost += val + } else { + return fmt.Errorf("failed to parse outputCost: %v", err) + } + case "inputTokens": + if val, err := strconv.Atoi(value); err == nil { + latestInputTokens += val + } else { + return fmt.Errorf("failed to parse inputTokens: %v", err) + } + case "outputTokens": + if val, err := strconv.Atoi(value); err == nil { + latestOutputTokens += val + } else { + return fmt.Errorf("failed to parse outputTokens: %v", err) + } + case "cacheWriteInputTokens": + if val, err := strconv.Atoi(value); err == nil { + latestCacheWriteInputTokens += val + } else { + return fmt.Errorf("failed to parse cacheWriteInputTokens: %v", err) + } + case "cacheReadInputTokens": + if val, err := strconv.Atoi(value); err == nil { + latestCacheReadInputTokens += val + } else { + return fmt.Errorf("failed to parse cacheReadInputTokens: %v", err) + } + } + } + + return nil +} + +func formatCosts() string { + return fmt.Sprintf("inputCost:%g outputCost:%g inputTokens:%d outputTokens:%d cacheWriteInputTokens:%d cacheReadInputTokens:%d", + latestInputCost, + latestOutputCost, + latestInputTokens, + latestOutputTokens, + latestCacheWriteInputTokens, + latestCacheReadInputTokens) +} + +func 
deriveFlowResult(run *scommon.RunSpec) data.RunStatus { + for _, c := range run.Order.Flows[run.CurrentFlowIdx].Cases { + if c.Result == data.RunStatusError { + return data.RunStatusError + } + if c.Result == data.RunStatusFailed { + return data.RunStatusFailed + } + } + return data.RunStatusPassed +} + +// ##################################################################################################################### +// # COMPILER + +func getCaseCompiler(thecase *data.OrderCaseItem) script.Compiler { + switch thecase.Type { + case data.CaseTypeFlowScript: + return &fscript.Compiler{} + case data.CaseTypeRaw: + return &rscript.Compiler{} + default: + panic("Unknown case type.") + } +} + +func compileOrders(spec *scommon.RunSpec) int { + if err := json.Unmarshal([]byte(spec.Text), &spec.Order); err != nil { + spec.Err = base.NewGinfraErrorA("Failed to unmarshal orders.", err).Error() + return 1 + } + + errnum := 0 + for flIdx := range spec.Order.Flows { + fl := &spec.Order.Flows[flIdx] + + // Range through all cases in the current fl + for caseIdx := range fl.Cases { + case_ := &fl.Cases[caseIdx] + + // Compile the case using caseCompileDispatch + compiler := getCaseCompiler(case_) + cc, errors := compiler.CompileCase(case_.Order) + + if errors != nil && len(errors) > 0 { + // If there are errors, put them into the Error field and set Result to RunStatusError + case_.Error = base.NewGinfraErrorA("Compile failed.", errors).Error() + case_.Result = data.RunStatusError + errnum++ + } else { + // If no error, put the compiled result into the Compiled field + case_.Compiled = cc + } + } + } + return errnum +} + +// ##################################################################################################################### +// # PROCESS + +func processLog(run *scommon.RunSpec, log string) { + jcontent := data.FormatAsJSON(log) + + // TODO Since it is going into postgres for now, it will be compressed there. 
These should never get too big,
+	// it would be ok to leave it in the db over the long run. But... then we can't properly index and scrape it
+	// for search and analytics. Eventually it needs to move somewhere else.
+	b64content := data.EncodeToBase64([]byte(jcontent))
+
+	if err := sclient.Send(stream.NewMsgRunLog(run.UidInt, run.IdInt, b64content)); err != nil {
+		local.Logger.Error(data.SMsgRunnerFailedSendRunLog, zap.Error(err), zap.String(base.LM_ID, run.Id),
+			zap.String(base.LM_UID, run.Uid))
+	}
+}
+
+// #####################################################################################################################
+// # FEATURES
+
+func lostProcess(uiid string) *stream.RawMessage {
+	local.Logger.Error(data.SMsgRunnerLostProcess, zap.String(base.LM_UID, uiid))
+	return wrapRawMessage(stream.NewMsgFault("Ghosted"))
+}
+
+func skipRemaining(run *scommon.RunSpec, current data.RunStatus) {
+	if run.CurrentFlowIdx < len(run.Order.Flows) &&
+		run.CurrentCaseIdx < len(run.Order.Flows[run.CurrentFlowIdx].Cases) {
+		run.Order.Flows[run.CurrentFlowIdx].Result = current
+		run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Result = current
+	}
+
+	if run.CurrentFlowIdx < len(run.Order.Flows) {
+		for i := run.CurrentCaseIdx + 1; i < len(run.Order.Flows[run.CurrentFlowIdx].Cases); i++ {
+			run.Order.Flows[run.CurrentFlowIdx].Cases[i].Result = data.RunStatusSkipped
+		}
+	}
+
+	// Mark all remaining flows and their cases as skipped
+	for i := run.CurrentFlowIdx + 1; i < len(run.Order.Flows); i++ {
+		run.Order.Flows[i].Result = data.RunStatusSkipped
+		for j := range run.Order.Flows[i].Cases {
+			run.Order.Flows[i].Cases[j].Result = data.RunStatusSkipped
+		}
+	}
+
+}
+
+func findPrompt(flow data.OrderFlowItem) string {
+	var r string
+
+	if flow.Decorated.Strict {
+		r = scommon.PromptStrict
+	}
+
+	lines := strings.Split(flow.Narrative, "\n")
+	var result []string
+	found := false
+
+	for _, line := range lines {
+		if strings.Contains(line, "##PROMPT") {
+			if
found { + break + } + found = true + continue + } + if found { + result = append(result, line) + } + } + + if len(result) > 0 { + r = r + strings.Join(result, "\n") + } + return r +} + +func sendCurrentFlow(run *scommon.RunSpec, id int) { + if err := sclient.Send(stream.NewMsgRunCurrentFlow(run.IdInt, id)); err != nil { + local.Logger.Error(data.SMsgRunnerFailedSendCurrentFlow, zap.Error(err), zap.String(base.LM_ID, run.Id), + zap.String(base.LM_UID, run.Uid)) + } +} + +func sendCurrentCase(run *scommon.RunSpec, id int) { + if err := sclient.Send(stream.NewMsgRunCurrentCase(run.IdInt, id)); err != nil { + local.Logger.Error(data.SMsgRunnerFailedSendCurrentCase, zap.Error(err), zap.String(base.LM_ID, run.Id), + zap.String(base.LM_UID, run.Uid)) + } +} + +func wrapRawMessage(message stream.Message) *stream.RawMessage { + r := stream.RawMessage{ + MType: message.GetMType(), + Data: message, + } + return &r +} + +func doMsgDone(run *scommon.RunSpec) (*stream.RawMessage, error) { + // Catchall for flow. 
+ if (run.CurrentFlowIdx < len(run.Order.Flows)) && (run.CurrentCaseIdx < len(run.Order.Flows[run.CurrentFlowIdx].Cases)) { + if run.Order.Flows[run.CurrentFlowIdx].Time == 0 { + run.Order.Flows[run.CurrentFlowIdx].Time = time.Since(startFlowTime).Milliseconds() + } + if run.Order.Flows[run.CurrentFlowIdx].Result <= data.RunStatusRunning { + run.Order.Flows[run.CurrentFlowIdx].Result = deriveFlowResult(run) + } + } + run.CurrentState = scommon.RunStateDone + run.Status = data.RunStatusDone + sendStatus(run, data.RunStatusDone, data.SMsgRunnerDonePendingResults) + return &stream.RawMessage{ + MType: stream.MsgTypeDone, + Data: stream.NewMsgDone(getKey(run.Uid, run.Id)), + }, nil +} + +func doMsgDoneError(run *scommon.RunSpec, msg string, status data.RunStatus) (*stream.RawMessage, error) { + run.CurrentState = scommon.RunStateDone + run.Status = status + if run.CurrentFlowIdx < len(run.Order.Flows) && run.CurrentCaseIdx < len(run.Order.Flows[run.CurrentFlowIdx].Cases) { + run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Costs = formatCosts() // TODO I think this is ok here, but keep an eye out for odd costs when errors happen. + run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Error = msg + logCase(run) + } + sendStatus(run, status, "Failing due to error.") + skipRemaining(run, status) + local.Logger.Error(data.SMsgRunnerTerminatingScriptError, zap.String(base.LM_CAUSE, msg)) + return &stream.RawMessage{ + MType: stream.MsgTypeDone, + Data: stream.NewMsgDone(getKey(run.Uid, run.Id)), + }, nil +} + +func doMsgFlow(run *scommon.RunSpec) (*stream.RawMessage, error) { + + // Find a flow with cases. 
+ for ; run.CurrentFlowIdx < len(run.Order.Flows); run.CurrentFlowIdx++ { + if len(run.Order.Flows[run.CurrentFlowIdx].Cases) > 0 { + run.CurrentCaseIdx = 0 + run.CurrentState = scommon.RunStateOrderFlow + sendCurrentFlow(run, run.Order.Flows[run.CurrentFlowIdx].Id) + startFlowTime = time.Now() + local.Logger.Info(data.SMsgRunnerStartingFlow, zap.String(base.LM_UID, run.Uid), zap.String(base.LM_ID, run.Id), zap.Int("flow", run.CurrentFlowIdx)) + return &stream.RawMessage{ + MType: stream.MsgTypeOrderFlow, + Data: stream.NewMsgOrderFlow(getKey(run.Uid, run.Id), run.Order.Flows[run.CurrentFlowIdx].Name, + findPrompt(run.Order.Flows[run.CurrentFlowIdx]), run.Order.Flows[run.CurrentFlowIdx].Id), + }, nil + } else { + run.Order.Flows[run.CurrentFlowIdx].Result = data.RunStatusSkipped + } + } + + run.CurrentState = scommon.RunStateDone + sendStatus(run, data.RunStatusDone, data.SMsgRunnerDonePendingResults) + local.Logger.Info(data.SMsgRunnerDonePendingResults, zap.String(base.LM_UID, run.Uid), zap.String(base.LM_ID, run.Id), zap.Int("flow", run.CurrentFlowIdx), zap.Int("case", run.CurrentCaseIdx)) + sendCurrentFlow(run, 0) + sendCurrentCase(run, 0) + return &stream.RawMessage{ + MType: stream.MsgTypeDone, + Data: stream.NewMsgDone(getKey(run.Uid, run.Id)), + }, nil +} + +func nextFlow(run *scommon.RunSpec) (*stream.RawMessage, error) { + local.Logger.Info("Flow complete", + zap.Int(base.LM_RESULT, int(run.Order.Flows[run.CurrentFlowIdx].Result)), + zap.Int(base.LM_ID, run.Order.Flows[run.CurrentFlowIdx].Id), + zap.Int(base.LM_ID_RUN, run.IdInt), + zap.String(base.LM_UID, run.Uid), + ) + run.CurrentFlowIdx++ + if run.CurrentFlowIdx >= len(run.Order.Flows) { + run.CurrentFlowIdx = 0 + return doMsgDone(run) + } + return doMsgFlow(run) +} + +func doMsgCase(run *scommon.RunSpec) (*stream.RawMessage, error) { + // We have twice made sure this is pointing at an actual case. 
+ clearCosts() + run.CurrentDirectiveIdx = 0 + run.CurrentState = scommon.RunStateOrderDirective + sendCurrentCase(run, run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Id) + startCaseTime = time.Now() + local.Logger.Info(data.SMsgRunnerStartingCase, zap.String(base.LM_UID, run.Uid), zap.String(base.LM_ID, run.Id), zap.Int("flow", run.CurrentFlowIdx), zap.Int("case", run.CurrentCaseIdx)) + return &stream.RawMessage{ + MType: stream.MsgTypeOrderCase, + Data: stream.NewMsgOrderCase(getKey(run.Uid, run.Id), run.Order.Flows[run.CurrentFlowIdx].Name, + run.Order.Flows[run.CurrentFlowIdx].Id), + }, nil +} + +func logCase(run *scommon.RunSpec) { + local.Logger.Info("Case complete", + zap.Int(base.LM_RESULT, int(run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Result)), + zap.Int(base.LM_ID, run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Id), + zap.Int(base.LM_ID_RUN, run.IdInt), + zap.Int("id.flow", run.Order.Flows[run.CurrentFlowIdx].Id), + zap.String(base.LM_UID, run.Uid), + zap.Float64(data.SMsgCostsInputCost, latestInputCost), + zap.Float64(data.SMsgCostsOutputCost, latestOutputCost), + zap.Int(data.SMsgCostsInputTokens, latestInputTokens), + zap.Int(data.SMsgCostsOutputTokens, latestOutputTokens), + zap.Int(data.SMsgCostsCacheWriteInputTokens, latestCacheWriteInputTokens), + zap.Int(data.SMsgCostsCacheReadInputTokens, latestCacheReadInputTokens), + zap.Duration(data.SMsgCostsCaseExecutionTime, time.Since(startCaseTime)), + ) +} + +func nextCase(run *scommon.RunSpec) (*stream.RawMessage, error) { + logCase(run) + run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Costs = formatCosts() + shipOrders(run, false) + run.CurrentCaseIdx++ + if run.CurrentCaseIdx >= len(run.Order.Flows[run.CurrentFlowIdx].Cases) { + run.Order.Flows[run.CurrentFlowIdx].Result = deriveFlowResult(run) + run.Order.Flows[run.CurrentFlowIdx].Time = time.Since(startFlowTime).Milliseconds() + run.CurrentCaseIdx = 0 + return nextFlow(run) + } else { 
+ return doMsgCase(run) + } +} + +func doMsgDirective(run *scommon.RunSpec) (*stream.RawMessage, error) { + var ( + text string + err error + dir int + ) + switch cc := run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Compiled.(type) { + case rscript.CompiledCase: + text = cc.Text + run.CurrentDirectiveIdx = scommon.DirectiveFlagDone + dir = 0 + case fscript.CompiledCase: + text = cc.Directives[run.CurrentDirectiveIdx].Text + if cc.Decorations.Strict { + text = scommon.PromptStrict + text + } + dir = int(cc.Directives[run.CurrentDirectiveIdx].Type) + run.CurrentDirectiveIdx++ + if run.CurrentDirectiveIdx >= len(cc.Directives) { + run.CurrentDirectiveIdx = scommon.DirectiveFlagDone + } + } + + if text, err = common.SimpleReplacer(text, run.Values, true); err != nil { + return doMsgDoneError(run, "Failed to replace values in directive due to error. msg="+err.Error(), data.RunStatusError) + } + + return &stream.RawMessage{ + MType: stream.MsgTypeOrderDirective, + Data: stream.NewMsgOrderDirective(getKey(run.Uid, run.Id), text, dir), + }, nil +} + +func msgReady(msg *stream.MsgReady) (*stream.RawMessage, error) { + local.Logger.Info("MsgReady") + + run, err := findRunSpecUiid(msg.Token) + if err != nil { + return lostProcess(msg.Token), nil + } + + switch run.CurrentState { + case scommon.RunStateStart: + if len(run.Order.Flows) <= run.CurrentFlowIdx { + return doMsgDone(run) + } else { + run.Status = data.RunStatusRunning + sendStatus(run, data.RunStatusRunning, "Running.") + local.Logger.Info("Starting run.", zap.String(base.LM_UID, run.Uid), zap.String(base.LM_ID, run.Id)) + run.CurrentState = scommon.RunStateOrderFlow + return doMsgFlow(run) + } + + case scommon.RunStateOrderFlow: + if len(run.Order.Flows[run.CurrentFlowIdx].Cases) <= run.CurrentCaseIdx { + local.Logger.Error("BUG: this should have never happened. 
Flow without any cases made it to a RunStateOrderFlow response.")
+			return doMsgDone(run)
+		} else {
+			run.CurrentState = scommon.RunStateOrderCase
+			return doMsgCase(run)
+		}
+
+	case scommon.RunStateOrderCase:
+		run.CurrentState = scommon.RunStateOrderDirective
+		return doMsgDirective(run)
+
+	case scommon.RunStateOrderDirective:
+		return doMsgDirective(run)
+	}
+
+	local.Logger.Error("BUG: this should have never happened. Unknown state for MsgReady.")
+	return doMsgDoneError(run, "Done, bug encountered in server. Administrators notified.", data.RunStatusError)
+}
+
+func msgResult(msg *stream.MsgResult) (*stream.RawMessage, error) {
+	local.Logger.Info("MsgResult")
+
+	run, err := findRunSpecUiid(msg.Token)
+	if err != nil {
+		return lostProcess(msg.Token), nil
+	}
+
+	if err = parseCosts(msg.Costs); err != nil {
+		local.Logger.Error(data.SMsgRunnerBadDirectiveResult, zap.Error(err))
+	}
+
+	// Example cost string: inputCost:0.00183925 outputCost:0.0001905 inputTokens:7357 outputTokens:254 cacheWriteInputTokens:0 cacheReadInputTokens:0
+	if msg.Result == data.RunStatusError || msg.Result == data.RunStatusFailed {
+		if run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Decorated.Soft {
+			run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Result = msg.Result
+			run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Error = msg.Info
+			run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Time = time.Since(startCaseTime).Milliseconds()
+			return nextCase(run)
+		} else {
+			return doMsgDoneError(run, msg.Info, msg.Result)
+		}
+	}
+
+	switch run.CurrentState {
+	case scommon.RunStateOrderDirective:
+		if run.CurrentDirectiveIdx == scommon.DirectiveFlagDone {
+			run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Result = msg.Result // TODO do we need to derive this?
+			run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Time = time.Since(startCaseTime).Milliseconds()
+			return nextCase(run)
+		}
+		return doMsgDirective(run)
+	}
+
+	local.Logger.Error("BUG: this should have never happened. Unknown state for MsgResult.")
+	return doMsgDoneError(run, "Done, bug encountered in server. Administrators notified.", data.RunStatusError)
+}
+
+func msgClosing(msg *stream.MsgClosing) (*stream.RawMessage, error) {
+
+	run, err := findRunSpecUiid(msg.Token)
+	if err != nil {
+		return wrapRawMessage(stream.NewMsgFault("")), err
+	}
+
+	moveFromRunningToHistory(run)
+
+	switch run.CurrentState {
+	case scommon.RunStateStart:
+		processLog(run, msg.Log)
+		sendStatus(run, data.RunStatusError, msg.Msg)
+		skipRemaining(run, data.RunStatusError)
+		local.Logger.Error(data.SMsgRunnerTerminatingStartError, zap.String(base.LM_CAUSE, msg.Msg))
+		return &stream.RawMessage{
+			MType: stream.MsgTypeDone,
+			Data:  stream.NewMsgDone(getKey(run.Uid, run.Id)),
+		}, nil
+
+	case scommon.RunStateDone:
+		sendCurrentFlow(run, 0)
+		sendCurrentCase(run, 0)
+		processLog(run, msg.Log)
+		if run.CurrentFlowIdx != 0 && run.CurrentCaseIdx != 0 {
+			// This keeps it from being set if the indexes have been reset (usually meaning graceful completion) or if nothing has happened.
+			run.Order.Flows[run.CurrentFlowIdx].Cases[run.CurrentCaseIdx].Costs = formatCosts() // This must happen before shipOrders.
+		}
+		shipOrders(run, true)
+		return &stream.RawMessage{
+			MType: stream.MsgTypeDone,
+			Data:  stream.NewMsgDone(getKey(run.Uid, run.Id)),
+		}, nil
+	}
+
+	local.Logger.Error("BUG: this should have never happened. Unknown state for MsgClosing.")
+	return doMsgDoneError(run, "Done, bug encountered in server.
Administrators notified.", data.RunStatusError) +} + +func msgFault(msg *stream.MsgFault) (*stream.RawMessage, error) { + run, err := findRunSpecUiid(msg.Token) + if err != nil { + return lostProcess(msg.Token), nil + } + local.Logger.Error(data.SMsgRunnerFault, zap.String(base.LM_UID, run.Uid), zap.String(base.LM_ID, run.Id), zap.Error(err)) + return doMsgDoneError(run, "Done. Failed due to runner fault.", data.RunStatusError) +} + +func MsgRecipient(message *stream.RawMessage) (*stream.RawMessage, error) { + local.Logger.Info(data.SMsgRunnerMsgRx, zap.Int(base.LM_TYPE, int(message.MType))) + + var ( + r *stream.RawMessage + ) + + defer func() { + if r := recover(); r != nil { + // Get stack trace + buf := make([]byte, 4096) + n := runtime.Stack(buf, false) + stackTrace := string(buf[:n]) + + // TODO do we need to fault the client? + // Log the panic + local.Logger.Error(data.SMsgRunnerMsgRxPanic, + zap.Any(base.LM_CAUSE, r), zap.String("stack_trace", stackTrace), + ) + } + }() + + m, err := stream.ExtractMessage(message) + if err != nil { + return wrapRawMessage(stream.NewMsgFault("")), err + } + + switch m := m.(type) { + case *stream.MsgReady: + r, err = msgReady(m) + case *stream.MsgResult: + r, err = msgResult(m) + case *stream.MsgClosing: + r, err = msgClosing(m) + case *stream.MsgFault: + r, err = msgFault(m) + default: + return nil, fmt.Errorf("unknown message type: %T", m) + } + + if err != nil { + r = wrapRawMessage(stream.NewMsgFault("")) + local.Logger.Error(data.SMsgRunnerMsgRxFatal, zap.Error(err)) + } + return r, err + +} diff --git a/magnitude/service/service_setup.go b/runner/service/service_setup.go similarity index 76% rename from magnitude/service/service_setup.go rename to runner/service/service_setup.go index a4fdf73eb8248409aeacf2af25707a8a353cd179..f09bee4aa953d017a0403a0f65a18ba9c5da1ae5 100644 --- a/magnitude/service/service_setup.go +++ b/runner/service/service_setup.go @@ -11,7 +11,8 @@ package service import ( 
"gitlab.com/ginfra/ginfra/base" "gitlab.com/ginfra/wwwherd/common/stream" - "gitlab.com/ginfra/wwwherd/magnitude/local" + "gitlab.com/ginfra/wwwherd/runner/local" + "gitlab.com/ginfra/wwwherd/runner/providers/message" "os" "path/filepath" "time" @@ -20,7 +21,7 @@ import ( // ##################################################################################################################### // # SETUP -func MagnitudeSetup(client stream.StreamClient) error { +func RunnerSetup(client stream.StreamClient, server message.MessageProvider) error { if !MagLock.Lock(time.Minute * 2) { return base.NewGinfraError("Setup failed due to deadlock.") } @@ -37,10 +38,8 @@ func MagnitudeSetup(client stream.StreamClient) error { time.Sleep(time.Second) // -- Nuke any running tests -------------------------------------------------------------- - for item := range running.IterBuffered() { - for runitem := range item.Val.IterBuffered() { - runitem.Val.Kill() - } + for _, item := range running { + item.Kill() } initStore() @@ -52,11 +51,17 @@ func MagnitudeSetup(client stream.StreamClient) error { panic(err) } + if err = server.Start(MsgRecipient); err != nil { + panic(err) + } + + SetupDispatch(client) + // -- Setup complete. Run it. --------------------------------------------------------------------- setupComplete = true - local.Logger.Info("Magnitude setup complete.") + local.Logger.Info("Runner setup complete.") if !loopRunning { - local.Logger.Info("Magnitude setup starting run loop.") + local.Logger.Info("Runner setup starting run loop.") loopRunning = true go runloop(client) } diff --git a/runner/service/service_store.go b/runner/service/service_store.go new file mode 100644 index 0000000000000000000000000000000000000000..7eeb51b3ea2423593ceeda23a5b90c495fd92024 --- /dev/null +++ b/runner/service/service_store.go @@ -0,0 +1,261 @@ +/* +Package service +Copyright (c) 2025 Erich Gatejen +[LICENSE]: https://www.apache.org/licenses/LICENSE-2.0.txt, if unaltered. 
You are free to alter and apply any license +you wish to this file. + +Work management. +*/ +package service + +import ( + "gitlab.com/ginfra/ginfra/base" + "gitlab.com/ginfra/wwwherd/runner/local" + scommon "gitlab.com/ginfra/wwwherd/runner/service/common" + "go.uber.org/zap" + "os" + "strings" + "sync" +) + +var ( + + // TODO these were cmap, but it was spinning the CPU going through the shards. This actually performs a lot better, but I suspect there is something more fundamental at work. + runQueue []*scommon.RunSpec + running = make(map[string]*scommon.RunSpec) + runHistory = make(map[string]*scommon.RunSpec) + + runQueueMu sync.RWMutex + runningMu sync.RWMutex + runHistoryMu sync.RWMutex +) + +func initStore() { + runQueueMu.Lock() + runningMu.Lock() + runHistoryMu.Lock() + defer runQueueMu.Unlock() + defer runningMu.Unlock() + defer runHistoryMu.Unlock() + + running = make(map[string]*scommon.RunSpec) + runHistory = make(map[string]*scommon.RunSpec) +} + +func getKey(uid, id string) string { + return uid + "_" + id +} + +// -- RUN QUEUE --------------------------------------------------------------------------------------------------- + +func runQueuePut(run *scommon.RunSpec) { + runQueueMu.Lock() + defer runQueueMu.Unlock() + runQueue = append(runQueue, run) +} + +func runQueueArray(uid string) []*scommon.RunSpec { + runQueueMu.RLock() + defer runQueueMu.RUnlock() + + // Make a copy + specs := make([]*scommon.RunSpec, 0) + for _, item := range runQueue { + if item.Uid == uid { + specs = append(specs, item) + } + } + return specs +} + +func runQueueRemove(uid, id string) *scommon.RunSpec { + runQueueMu.Lock() + defer runQueueMu.Unlock() + + for i, run := range runQueue { + if run.Uid == uid && run.Id == id { + // Found the item, remove it from the slice + removed := run + runQueue = append(runQueue[:i], runQueue[i+1:]...) 
+ return removed + } + } + + // Not found + return nil +} + +func runQueueDequeue() *scommon.RunSpec { + runQueueMu.Lock() + defer runQueueMu.Unlock() + + if len(runQueue) == 0 { + return nil + } + + // Get the first item (FIFO) + run := runQueue[0] + + // Remove it from the queue + runQueue = runQueue[1:] + + return run +} + +func runQueueLen() int { + runQueueMu.Lock() + defer runQueueMu.Unlock() + return len(runQueue) +} + +// -- RUNNING --------------------------------------------------------------------------------------------------- + +func runningPut(run *scommon.RunSpec) { + runningMu.Lock() + defer runningMu.Unlock() + running[getKey(run.Uid, run.Id)] = run +} + +func runningLen() int { + runningMu.Lock() + defer runningMu.Unlock() + return len(running) +} + +func runningGetArray(uid string) []*scommon.RunSpec { + runningMu.RLock() + defer runningMu.RUnlock() + + specs := make([]*scommon.RunSpec, 0) + for _, item := range running { + if item.Uid == uid { + specs = append(specs, item) + } + } + return specs +} + +// -- HISTORY -------------------------------------------------------------------------------------------------- + +func historyPut(run *scommon.RunSpec) { + runHistoryMu.Lock() + defer runHistoryMu.Unlock() + runHistory[getKey(run.Uid, run.Id)] = run +} + +func historyRemove(uid, id string) *scommon.RunSpec { + runHistoryMu.Lock() + defer runHistoryMu.Unlock() + + var r, _ = runHistory[getKey(uid, id)] + delete(runHistory, getKey(uid, id)) + return r +} + +func historyGetArray(uid string) []*scommon.RunSpec { + runHistoryMu.RLock() + defer runHistoryMu.RUnlock() + + specs := make([]*scommon.RunSpec, 0) + for _, item := range runHistory { + if item.Uid == uid { + specs = append(specs, item) + } + } + return specs +} + +// -- ALL -------------------------------------------------------------------------------------- + +func moveFromRunningToHistory(run *scommon.RunSpec) { + // Lock both mutexes to ensure atomic operation + runningMu.Lock() + 
runHistoryMu.Lock()
+	defer runningMu.Unlock()
+	defer runHistoryMu.Unlock()
+
+	// Add to history
+	runHistory[getKey(run.Uid, run.Id)] = run
+
+	// Remove from running
+	delete(running, getKey(run.Uid, run.Id))
+
+	// Handle any file store. Right now we just delete it all.
+	if run.Path != "" {
+		err := os.RemoveAll(run.Path)
+		if err != nil {
+			local.Logger.Error("failed to delete directory", zap.String("path", run.Path), zap.Error(err))
+		}
+	}
+}
+
+func findRunSpecUiid(token string) (*scommon.RunSpec, error) {
+	// Find the uiid
+	uiid, ok := tokens[token]
+	if !ok {
+		return nil, base.NewGinfraErrorAFlags("Unknown token.", base.ERRFLAG_BAD_ID, base.LM_TOKEN, token)
+	}
+
+	// Extract uid and id from uiid using the reverse of getKey
+	parts := strings.Split(uiid, "_")
+	if len(parts) != 2 {
+		return nil, base.NewGinfraErrorAFlags("Invalid uiid format. Expected format: uid_id", base.ERRFLAG_BAD_ID,
+			base.LM_ID, uiid)
+	}
+
+	uid := parts[0]
+	id := parts[1]
+
+	// Use the existing findRunSpec function
+	return findRunSpec(uid, id)
+}
+
+func findRunSpec(uid, id string) (*scommon.RunSpec, error) {
+	// Check running
+	runningMu.RLock()
+	for _, item := range running {
+		if item.Id == id {
+			if item.Uid == uid {
+				runningMu.RUnlock()
+				return item, nil
+			} else {
+				runningMu.RUnlock()
+				return nil, base.NewGinfraErrorAFlags("RUNNING: Run does not belong to user.", base.ERRFLAG_BAD_ID,
+					base.LM_ID, id, base.LM_UID, uid)
+			}
+		}
+	}
+	runningMu.RUnlock()
+
+	// Check history
+	runHistoryMu.RLock()
+	for _, item := range runHistory {
+		if item.Id == id {
+			if item.Uid == uid {
+				runHistoryMu.RUnlock()
+				return item, nil
+			} else {
+				runHistoryMu.RUnlock()
+				return nil, base.NewGinfraErrorAFlags("HISTORY: Run does not belong to user.", base.ERRFLAG_BAD_ID,
+					base.LM_ID, id, base.LM_UID, uid)
+			}
+		}
+	}
+	runHistoryMu.RUnlock()
+
+	// Check queue
+	runQueueMu.RLock()
+	for _, item := range runQueue {
+		if item.Id == id {
+			if item.Uid == uid {
+				runQueueMu.RUnlock()
+				return item, nil
+			} else {
+				runQueueMu.RUnlock()
+				return nil, 
base.NewGinfraErrorAFlags("RUNNING: Run does not belong to user.", base.ERRFLAG_BAD_ID, + base.LM_ID, id, base.LM_UID, uid) + } + } + } + runQueueMu.RUnlock() + + return nil, base.NewGinfraErrorAFlags("Id not found.", base.ERRFLAG_NOT_FOUND, base.LM_ID, id) +} diff --git a/magnitude/service/service_test.go b/runner/service/service_test.go similarity index 100% rename from magnitude/service/service_test.go rename to runner/service/service_test.go diff --git a/magnitude/service/service_values.go b/runner/service/service_values.go similarity index 96% rename from magnitude/service/service_values.go rename to runner/service/service_values.go index bec08fe87c570fd98f457871313e6b6c9f3a4abf..04e3b59f5811e0e71bc3974dda26f4804904fe8b 100644 --- a/magnitude/service/service_values.go +++ b/runner/service/service_values.go @@ -17,7 +17,7 @@ import ( "encoding/json" "fmt" "gitlab.com/ginfra/ginfra/base" - "gitlab.com/ginfra/wwwherd/magnitude/service/common" + "gitlab.com/ginfra/wwwherd/runner/service/common" "io" "os" "path/filepath" @@ -27,13 +27,9 @@ import ( // ##################################################################################################################### // # HARD CONFIGURATIONS AND DEFAULTS -const workerPoolSize = 10 +const runningPoolSize = 10 const maxRunningPerUid = 5 -const DefaultLLMModel = "claude-sonnet-4-20250514" -const DefaultLLMModelSelf = "Qwen2.5-VL-32B-Instruct-q4_k_m" -const SpecificsMoondream = "moondream" - // ##################################################################################################################### // # TYPES diff --git a/magnitude/service/util.go b/runner/service/util.go similarity index 93% rename from magnitude/service/util.go rename to runner/service/util.go index dbf9662e971f87168f22b39a4c98e5f3e0403d8e..d4d219c25c102c297ca706cab4b83d084dd5a5d0 100644 --- a/magnitude/service/util.go +++ b/runner/service/util.go @@ -13,7 +13,7 @@ import ( "gitlab.com/ginfra/ginfra/base" 
"gitlab.com/ginfra/ginfra/common/config" "gitlab.com/ginfra/ginfra/common/service/gotel" - "gitlab.com/ginfra/wwwherd/magnitude/local" + "gitlab.com/ginfra/wwwherd/runner/local" "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/trace" "go.uber.org/zap" @@ -24,8 +24,8 @@ import ( // InitService goes through the service initialization process. It will call initfn if it is the first service to do so. // It will call loadfn is not the first service. The idea is the first call can set everything up and the second can // just load the configuration information. -func InitService(service *Servicemagnitude, initfn func(service *Servicemagnitude) error, - loadfn func(service *Servicemagnitude) error) error { +func InitService(service *Servicerunner, initfn func(service *Servicerunner) error, + loadfn func(service *Servicerunner) error) error { const ( POLL_DELAY = 350 diff --git a/magnitude/specifics.json b/runner/specifics.json similarity index 79% rename from magnitude/specifics.json rename to runner/specifics.json index 50105e345d057e2247f730101ff9b2cd69b21ab9..e6cd3637b3fa6c4c3318524f99d8476497a157ff 100644 --- a/magnitude/specifics.json +++ b/runner/specifics.json @@ -1,10 +1,10 @@ { "meta": { "version": 1, - "name": "magnitude", + "name": "runner", "type": "nodejs", - "module_name": "gitlab.com/ginfra/wwwherd/magnitude", - "serviceclass": "magnitude", + "module_name": "gitlab.com/ginfra/wwwherd/runner", + "serviceclass": "runner", "invocationcmd": "" }, "exposed": { diff --git a/magnitude/src/index.js b/runner/src/index.js similarity index 100% rename from magnitude/src/index.js rename to runner/src/index.js diff --git a/runner/src/index.ts b/runner/src/index.ts new file mode 100644 index 0000000000000000000000000000000000000000..0b653834b0edfe11c8df333615b95bd1c5c71420 --- /dev/null +++ b/runner/src/index.ts @@ -0,0 +1,627 @@ +// WWWHerd (c) 2025 Ginfra Project. All rights Reserved. 
Source licensed under Apache License Version 2.0, January 2004 +import {startBrowserAgent, BrowserAgent} from 'magnitude-core'; +import * as fs from 'fs'; +import {start} from "node:repl"; + +// =================================================================================================================== +// How to use: +// ARG1: Token +// ARG2: controller URL +// ARG3: Target host url +// ARG4: Store directory +// ARG5: Provider name +// ARG6: apiKey +// ARG7: model +// ARG8: url to provider + +const token = process.argv[2]; +const controllerUrl = process.argv[3]; +const target = process.argv[4]; +const store = process.argv[5]; +const provider = process.argv[6]; +const apiKey = process.argv[7]; +const model = process.argv[8]; +const url = process.argv[9]; + +const logFile = `${store}/log.json`; +const dispFile = `${store}/disp.json`; + +const argData = { + target: target, + controllerUrl: controllerUrl, + store: store, + provider: provider, + apiKey: apiKey, + model: model, + url: url +}; + +fs.writeFileSync(dispFile, JSON.stringify(argData, null, 2) + '\n'); + +let fd = fs.openSync(logFile, 'w'); +fs.writeSync(fd, '[\n'); + + +// =================================================================================================================== +// Console output capture + +interface CapturedLogEntry { + timestamp: string; + type: string; + message: string; +} + +type ConsoleMethod = typeof console.log; +const createConsoleCapture = ( + type: string +): ConsoleMethod => { + return (arg: any): void => { + let processedArg: any; + + try { + // Check if arg is already a valid JSON object/string + if (typeof arg === 'object' && arg !== null) { + processedArg = arg; + } else if (typeof arg === 'string') { + // Try to parse as JSON first + try { + processedArg = JSON.parse(arg); + } catch { + // If not JSON, clean terminal formatting and keep as string + processedArg = arg.replace(/\x1B\[[0-9;]*[a-zA-Z]/g, ''); + } + } else { + // For other types, convert to string 
and clean formatting
+                processedArg = String(arg).replace(/\x1B\[[0-9;]*[a-zA-Z]/g, '');
+            }
+        } catch (error) {
+            // If anything fails, convert to string representation
+            processedArg = String(arg).replace(/\x1B\[[0-9;]*[a-zA-Z]/g, '');
+        }
+
+        const logEntry: CapturedLogEntry = {
+            timestamp: new Date().toISOString(),
+            type: type,
+            message: processedArg
+        };
+
+        fs.writeSync(fd, JSON.stringify(logEntry, null, 2)+ ",\n");
+    };
+};
+
+// Override console methods to capture output
+console.log = createConsoleCapture('LOG');
+console.error = createConsoleCapture('ERROR');
+console.warn = createConsoleCapture('WARN');
+console.info = createConsoleCapture('INFO');
+
+// ====================================================================================================================
+// VALUES
+
+const StateOpen = 0;
+const StateFlow = 1;
+const StateCase = 2;
+const StateDirective = 3;
+const StateFault = 4;
+const StateError = 5;
+
+let state = StateOpen;
+
+// This must match common/stream/catalog.go enumeration.
+const MsgTypeRunNoMessage = 0; +const MsgTypeRunStatus = 1; +const MsgTypeRunOrders = 2; +const MsgTypeRunCurrentFlow = 3; +const MsgTypeRunCurrentCase = 4; +const MsgTypeCancelComplete = 5; +const MsgTypeRunLog = 6; +const MsgTypeReady = 7; +const MsgTypeFault = 8; +const MsgTypeOrderFlow = 9; +const MsgTypeOrderCase = 10; +const MsgTypeOrderDirective = 11; +const MsgTypeResult = 12; +const MsgTypeDone = 13; +const MsgTypeClosing = 14; + +let currentFlowId = 0; +let currentCaseId = 0; + + +// ==================================================================================================================== +// Messages + +async function sendReady() { + return await makeHttpRequest(String(MsgTypeReady), { + "mtype": MsgTypeReady, + "token": token, + }); +} + +async function sendFault() { + return await makeHttpRequest(String(MsgTypeFault), { + "mtype": MsgTypeFault, + "token": token, + }); +} + +async function sendClosing(msg: string) { + return await makeHttpRequest(String(MsgTypeClosing), { + "mtype": MsgTypeClosing, + "token": token, + "msg": msg || "", + "log": getLog() + }); +} + +async function sendResult() { + if (pendingResult != null && pendingResult.status != "success") { + return await makeHttpRequest(String(MsgTypeResult), { + "mtype": MsgTypeResult, + "token": token, + "result": 9, // data.RunStatusFailed + "info": pendingResult?.reasoning || "", + "costs": pendingResult?.cost || "" + }); + } + return await makeHttpRequest(String(MsgTypeResult), { + "mtype": MsgTypeResult, + "token": token, + "result": 8, // data.RunStatusPassed + "info": "OK", + "costs": pendingResult?.cost || "" + }); +} + +// ==================================================================================================================== +// AGENT + +let agent : BrowserAgent | null = null; + +try { + agent = await startBrowserAgent({ + // Starting URL for agent + url: target || "http://localhost:8888", + // Show thoughts and actions + narrate: true, + // LLM configuration + 
llm: { + provider: provider as "anthropic" | "claude-code" | "aws-bedrock" | "google-ai" | "vertex-ai" | "openai" | "openai-generic" | "azure-openai", + options: { + model: model, + apiKey: apiKey, + baseUrl: url + } + }, + }); + agent.events.on('tokensUsed', (usage: any) => { + if (usage.inputTokens !== undefined) { + latestInputTokens += usage.inputTokens; + } + if (usage.inputCost !== undefined) { + latestInputCost += usage.inputCost; + } + if (usage.outputTokens !== undefined) { + latestOutputTokens += usage.outputTokens; + } + if (usage.outputCost !== undefined) { + latestOutputCost += usage.outputCost; + } + if (usage.cacheWriteInputTokens !== undefined) { + latestCacheWriteInputTokens += usage.cacheWriteInputTokens; + } + if (usage.cacheReadInputTokens !== undefined) { + latestCacheReadInputTokens += usage.cacheReadInputTokens; + } + }); +} catch (error) { + process.stderr.write("error starting agent: " + (error instanceof Error ? error.message : String(error)) + "\n"); + console.error(JSON.stringify({ + type: "AGENT", + status: "failed", + description: "starting agent", + reasoning: error instanceof Error ? error.message : String(error), + })) + let msg = error instanceof Error ? 
error.message : String(error);
+    if (msg.includes('ERR_CONNECTION_REFUSED')) {
+        msg = "Cannot connect to target host.";
+    }
+    await sendClosing(msg)
+    fs.closeSync(fd);
+    process.exit(1); // The agent could not start, so there is nothing more we can do.
+}
+
+//process.stdout.write(JSON.stringify(agent, null, 2) + '\n')
+
+let latestInputTokens = 0.0
+let latestOutputTokens = 0.0
+let latestInputCost = 0.0
+let latestOutputCost = 0.0
+let latestCacheWriteInputTokens = 0.0
+let latestCacheReadInputTokens = 0.0
+
+function getCost() {
+    return "inputCost:" + latestInputCost + " outputCost:" + latestOutputCost + " inputTokens:" + latestInputTokens +
+        " outputTokens:" + latestOutputTokens + " cacheWriteInputTokens:" + latestCacheWriteInputTokens +
+        " cacheReadInputTokens:" + latestCacheReadInputTokens;
+}
+
+let prompt = ""
+let promptset = ""
+let promptglobal = ""
+let data: Record<string, any> = {}
+
+// Function to perform agent action
+async function act(order: string): Promise<{ status: string; reasoning: string, cost: string }> {
+    // TODO Not sure if magnitude will freak if either data or prompt is empty. We shall see.
+
+    latestInputTokens = 0.0
+    latestOutputTokens = 0.0
+    latestInputCost = 0.0
+    latestOutputCost = 0.0
+    latestCacheWriteInputTokens = 0.0
+    latestCacheReadInputTokens = 0.0
+
+    let result = {
+        status: "success",
+        reasoning: "",
+        cost: ""
+    }
+
+    if (agent == null) {
+        // The agent never started; report a failure rather than crash on a null reference.
+        result.status = "failed";
+        result.reasoning = "Browser agent is not running.";
+        return result;
+    }
+
+    await agent.act(order, {data: data, prompt: promptglobal + ` ` + promptset + ` ` + prompt}).catch(error =>
+    { result = {
+        status: "failed",
+        reasoning: error instanceof Error ? error.message : String(error),
+        cost: ""
+    };
+        console.error(error instanceof Error ? error.message : String(error))
+    })
+    data = {};
+    prompt = "";
+    result.cost = getCost();
+
+    process.stdout.write(JSON.stringify(result, null, 2) + '\n')
+
+    return result;
+}
+
+const CHECK_INSTRUCTIONS=`
+Given the actions of an LLM agent executing a test case, and a screenshot taken afterwards, evaluate whether the provided check "passes" i.e. holds true or not.
+
+Check to evaluate:
+`.trim();
+
+// Check doesn't scroll, so we have to just use act with the correct prompt.
+async function check(description: string): Promise<{ status: string; reasoning: string }> {
+    return await act(CHECK_INSTRUCTIONS + description)
+}
+
+// ===================================================================================================================
+// DIRECTIVE
+
+enum FlowDirectiveType {
+    DirectiveAct,
+    DirectiveCheck,
+    DirectiveData,
+    DirectivePrompt,
+    DirectivePromptSet,
+    DirectivePromptClear,
+    DirectivePromptGlobal,
+    DirectivePromptGlobalClear,
+    DirectiveNav,
+    DirectivePing,
+    DirectiveRaw,
+    DirectiveUnknown
+}
+
+let pendingResult: any | null = null
+
+async function doDirective(info: any) {
+    let line = info.text
+    let dir = info.type
+
+    switch (dir) {
+        case FlowDirectiveType.DirectiveAct:
+            console.log("##ACT");
+            pendingResult = await act(line);
+            console.log("##ACT DONE");
+            break;
+
+        case FlowDirectiveType.DirectiveCheck:
+            console.log("##CHECK");
+            pendingResult = await check(line);
+            console.log("##CHECK DONE");
+            break;
+
+        case FlowDirectiveType.DirectiveData:
+            console.log("##DATA");
+            try {
+                data = JSON.parse(line);
+            } catch (error) {
+                pendingResult = {
+                    status: "failed",
+                    reasoning: error instanceof Error ? 
error.message : String(error)
+                }
+                console.error('Error parsing JSON for ##DATA: ', error);
+            }
+            console.log("##DATA DONE");
+            break;
+
+        case FlowDirectiveType.DirectivePrompt:
+            console.log("##PROMPT");
+            prompt = line;
+            pendingResult = null
+            console.log("##PROMPT DONE");
+            break;
+
+        case FlowDirectiveType.DirectivePromptSet:
+            console.log("##PROMPTSET");
+            promptset = line;
+            pendingResult = null
+            console.log("##PROMPTSET DONE");
+            break;
+
+        case FlowDirectiveType.DirectivePromptClear:
+            console.log("##PROMPTCLEAR");
+            promptset = ""
+            pendingResult = null
+            console.log("##PROMPTCLEAR DONE");
+            break;
+
+        case FlowDirectiveType.DirectiveNav:
+            console.log("##NAV");
+            try {
+                await agent?.nav(line.substring(6).trim());
+            } catch (error) {
+                pendingResult = {
+                    status: "failed",
+                    reasoning: error instanceof Error ? error.message : String(error)
+                }
+                console.error('NAV ERROR:', error);
+            }
+            console.log("##NAV DONE");
+            break;
+
+        default:
+            console.log("Unused case directive: " + dir);
+            break;
+
+    }
+}
+
+// ===================================================================================================================
+// Tool
+
+function sleep(ms: number): Promise<void> {
+    return new Promise(resolve => setTimeout(resolve, ms));
+}
+
+// Optional: Handle process termination signals
+process.on('SIGINT', async (): Promise<void> => {
+    fs.writeSync(fd, ' "order":"exit on terminate"\n ]\n');
+    fs.closeSync(fd);
+    await agent?.stop()
+    process.stderr.write("SIGINT stopping.\n");
+    process.exit(0);
+});
+
+function getLog() {
+    const finalEntry: CapturedLogEntry = {
+        timestamp: new Date().toISOString(),
+        type: "LOG",
+        message: "Ending"
+    };
+    fs.writeSync(fd, JSON.stringify(finalEntry, null, 2)+ "\n]\n");
+    return fs.readFileSync(logFile, 'utf8');
+}
+
+// ====================================================================================================================
+// RUN LOOP
+
+let keepRunning = true; // Control 
variable for the loop
+
+// Function to stop the loop (you can call this to set keepRunning to false)
+function stopHttpLoop(): void {
+    keepRunning = false;
+}
+
+// Create an HTTP client function. Exceptions that leave this function are fatal to the runner.
+async function makeHttpRequest(mid: string, requestBody: any): Promise<any> {
+    let body = JSON.stringify(requestBody)
+    try {
+        let response = await fetch(controllerUrl, {
+            method: 'POST',
+            headers: {
+                'Stream-Message-Id': mid,
+                'Content-Type': 'application/json',
+                'auth-token': token
+            },
+            body: body
+        });
+
+        if (response.ok) {
+            let result = await response.json();
+            process.stderr.write("http request response complete\n");
+            return result;
+        } else {
+            process.stderr.write("http error\n");
+            throw new Error(`HTTP Error: ${response.status} ${response.statusText}`);
+        }
+    } catch (error: any) {
+        process.stderr.write("http request error\n");
+        throw new Error(`HTTP Error: ${error.message || error}`);
+    }
+}
+
+// ====================================================================================================================
+// MAIN
+
+function startFlow(info: any) {
+    console.log('start flow: ' + info.name)
+    state = StateFlow;
+    promptglobal = info.prompt
+    currentFlowId = info.flowId
+}
+
+function startCase(info: any) {
+    console.log('start case: ' + info.name)
+    state = StateCase;
+    promptglobal = info.prompt
+    promptset = ""
+    prompt = ""
+    currentCaseId = info.case_id
+}
+
+async function doStop(): Promise<void> {
+    process.stdout.write("doStop\n");
+    await sendClosing("")
+    stopHttpLoop()
+}
+
+async function doFault(info: string): Promise<void> {
+    process.stdout.write("doFault:" + info + "\n");
+    console.log('"fault": "' + info + '",')
+    await sendFault()
+    await doStop()
+}
+
+let waitLoop = true
+
+// Main loop function
+async function startLoop() {
+
+    // This would probably be cleaner and more elegant with recursion, but there is no way to constrain the depth heading into it.
+ process.stdout.write("Start loop\n"); + while (keepRunning) { + try { + switch (state) { + + // -- STATE OPEN ----------------------------------------------------------------------------------------------- + case StateOpen: + process.stdout.write("StateOpen\n"); + let respo = await sendReady() + if (respo.mtype == MsgTypeOrderFlow) { + process.stdout.write("..... MsgTypeOrderFlow\n"); + startFlow(respo.data) + } else if (respo.mtype == MsgTypeDone) { + await doFault("No flow or cases to run.") + } else { + await doFault("Protocol violation. (Open)") + } + break + + // -- STATE FLOW ----------------------------------------------------------------------------------------------- + case StateFlow: + process.stdout.write("StateFlow\n"); + let respf = await sendReady() + if (respf.mtype == MsgTypeOrderCase) { + process.stdout.write("..... MsgTypeOrderCase\n"); + startCase(respf.data) + } else { + process.stdout.write("..... fault\n"); + await doFault("Protocol violation. (Flow)") + } + break + + // -- STATE CASE ----------------------------------------------------------------------------------------------- + case StateCase: + process.stdout.write("StateCase\n"); + let respc = await sendReady() + if (respc.mtype == MsgTypeOrderFlow) { + // Technically legal. + startFlow(respc.data) + } else if (respc.mtype == MsgTypeOrderCase) { + process.stdout.write("..... MsgTypeOrderCase\n"); + startCase(respc.data) + } else if (respc.mtype == MsgTypeOrderDirective) { + process.stdout.write("..... MsgTypeOrderDirective\n"); + state = StateDirective; + await doDirective(respc.data) + } else if (respc.mtype == MsgTypeDone) { + process.stdout.write("..... MsgTypeDone\n"); + await doStop() + } else { + process.stdout.write("..... fault\n"); + await doFault("Protocol violation. 
(Case)") + } + break + + // -- STATE DIRECTIVE ------------------------------------------------------------------------------------------ + case StateDirective: + process.stdout.write("StateDirective\n"); + let respd = await sendResult() + if (respd.mtype == MsgTypeOrderFlow) { + // Technically legal. + process.stdout.write("..... MsgTypeOrderFlow\n"); + startFlow(respd.data) + } else if (respd.mtype == MsgTypeOrderCase) { + process.stdout.write("..... MsgTypeOrderCase\n"); + startCase(respd.data) + } else if (respd.mtype == MsgTypeOrderDirective) { + process.stdout.write("..... MsgTypeOrderDirective\n"); + await doDirective(respd.data) + } else if (respd.mtype == MsgTypeDone) { + process.stdout.write("..... MsgTypeDone\n"); + await doStop() + } else { + process.stdout.write("..... fault\n"); + await doFault("Protocol violation. (Directive) m=" + respd.mtype) + } + break + + // -- END -------------------------------------------------------------------------------------------------------- + } + } catch (error) { + process.stderr.write("Exception in processing:" + (error instanceof Error ? error.message : String(error)) + "\n"); + // Attempt to fault. 
+            await doFault("Processing error")
+            return
+        }
+    }
+    process.stderr.write("leaving keepRunning\n");
+    waitLoop = false
+}
+
+async function main() {
+    await startLoop();
+    process.stderr.write('Normal main stop\n');
+    process.exit(0);
+}
+
+main().catch(error => {
+    process.stderr.write('Error in main:' + error.message + "\n");
+    process.exit(1);
+});
+
+function delay(ms: number): Promise<void> {
+    return new Promise(resolve => setTimeout(resolve, ms));
+}
+
+(async () => {
+    console.log("Starting.");
+
+    // Block the main thread until the module-level waitLoop becomes false
+    while (waitLoop) {
+        await delay(100); // Check every 100ms
+    }
+    fs.closeSync(fd);
+})();
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/magnitude/src/run.sh b/runner/src/run.sh
old mode 100644
new mode 100755
similarity index 73%
rename from magnitude/src/run.sh
rename to runner/src/run.sh
index ad1b9587c808024fec07e009b65a6ef6eba37820..4ab91ee533f8b22b072407b86044ff73ee2fec58
--- a/magnitude/src/run.sh
+++ b/runner/src/run.sh
@@ -7,7 +7,7 @@
 
 # Capture the npm command output and exit code
 
-npm exec tsx src/index.ts $1 $2 $3 $4 $5 $6 $7 2>&1 | tee /build/tmp/npm_output.txt
+npm exec tsx src/index.ts $1 $2 $3 $4 $5 $6 $7 $8 2>&1 | tee /build/tmp/npm_output.txt
 
 EXIT_CODE=${PIPESTATUS[0]}
diff --git a/magnitude/start_local.cmd b/runner/start_local.cmd
similarity index 100%
rename from magnitude/start_local.cmd
rename to runner/start_local.cmd
diff --git a/magnitude/start_local.sh b/runner/start_local.sh
similarity index 100%
rename from magnitude/start_local.sh
rename to runner/start_local.sh
diff --git a/magnitude/taskfile.yaml b/runner/taskfile.yaml
similarity index 100%
rename from magnitude/taskfile.yaml
rename to runner/taskfile.yaml
diff --git a/magnitude/taskfilecustom.yaml b/runner/taskfilecustom.yaml
similarity index 66%
rename from magnitude/taskfilecustom.yaml
rename to runner/taskfilecustom.yaml
index 
cdeccf8345f103f32e931a2ead27db89a2e554a5..717b3bceb04bb845604d6a5d391507b54fc464bf 100644 --- a/magnitude/taskfilecustom.yaml +++ b/runner/taskfilecustom.yaml @@ -4,7 +4,7 @@ tasks: stubs: cmds: - - ginfra ginterface stubs openapi magnitude service service_test.yaml + - ginfra ginterface stubs openapi runner service service_test.yaml prime_local: run: once diff --git a/magnitude/test/README.md b/runner/test/README.md similarity index 100% rename from magnitude/test/README.md rename to runner/test/README.md diff --git a/magnitude/test/config.json b/runner/test/config.json similarity index 87% rename from magnitude/test/config.json rename to runner/test/config.json index 35439d8a05d5a1d3aa49aa4e2a2fa0736b903ea4..405e38f7394342c7b8036b44935326588e0b8d9a 100644 --- a/magnitude/test/config.json +++ b/runner/test/config.json @@ -7,11 +7,11 @@ "*": "localhost" }, "services": { - "magnitude/test": "http://localhost:8901/" + "runner/test": "http://localhost:8901/" } }, "services": { - "magnitude": { + "runner": { "management": { "auth": "XAWJYFFDIDRY", "instance": { @@ -27,9 +27,9 @@ "auth": "XAWJYFFDIDRY", "config": { "config_provider": "file://test/config.json", - "image": "magnitude", + "image": "runner", "port": 8901, - "service_class": "magnitude", + "service_class": "runner", "specifics": { "stream_client_target": "http://localhost:8900" } diff --git a/magnitude/test/test_config.json b/runner/test/test_config.json similarity index 97% rename from magnitude/test/test_config.json rename to runner/test/test_config.json index 7358883a7b2fd1331e185fcb66af48d356e4b523..23d2b47877959168c6421fe370c068b556beb414 100644 --- a/magnitude/test/test_config.json +++ b/runner/test/test_config.json @@ -146,7 +146,7 @@ }, "type": "host" }, - "magnitude": { + "runner": { "management": { "auth": "XKQNFQRBTFJG", "instance": { @@ -162,9 +162,9 @@ "auth": "XKQNFQRBTFJG", "config": { "config_provider": "http://deskgoat:5111", - "image": "ginfra/magnitude", + "image": "wwwherd/runner", 
"port": 8905, - "service_class": "magnitude", + "service_class": "runner", "specifics": { } }, diff --git a/magnitude/tsconfig.json b/runner/tsconfig.json similarity index 90% rename from magnitude/tsconfig.json rename to runner/tsconfig.json index 8cfda835daad939db87fd37b556ac1142c932a45..1c730e3afcab8d5873dce3a2cbb7b0023a96e26c 100644 --- a/magnitude/tsconfig.json +++ b/runner/tsconfig.json @@ -1,7 +1,7 @@ { "compilerOptions": { "target": "ES2022", - "module": "CommonJS", + "module": "ES2022", "lib": ["ES2022"], "outDir": "./dist", "rootDir": "./src", @@ -16,7 +16,7 @@ "allowSyntheticDefaultImports": true, "experimentalDecorators": true, "emitDecoratorMetadata": true, - "types": ["node", "magnitude-test"], + "types": ["node", "magnitude-test"] }, "include": [ "src/**/*", diff --git a/service/README.md b/service/README.md index b22e2f2c9393c6e3e9166a28cf9e40dbc8522313..20c2426eac6e6e5735153e56bda2df853615f466 100644 --- a/service/README.md +++ b/service/README.md @@ -5,53 +5,6 @@ controller. # Notes on running it. -## Standalone - -To run it standalone without using Ginfra. - -### First time setup and start. - -1- You will need a POSTGRES agent running somewhere, and you will need the postgres user connect URL. The URL will look -something like this: -``` -postgres://postgres:YjkyNDE4ZGJjMjQ1@myaddress.com:5432 -``` -2- Copy the file build/service_config.json.NEW to the repo root directory as service_config.json. -3- Copy the file test/config.json to test/working.json -4- Edit test/working.json and replace 'postgres://postgres:<<>>@<<>>:5432' with the url above. -5- cd to the repo root directory. Build and start the agent with the following commands -``` -go build -v -o build/ginfra_service.exe -build/ginfra_service.exe $PWD -``` -6- In another terminal, initialize the database with the following command. 
-``` -curl -X POST \ - -H "Content-Type: application/json" \ - -H "auth: AAAAAAAAAAAA" \ - -d '{ - "name": "wwwherd" - }' \ - http://localhost:8900/ginfra/datasource/init -``` - You should see a user named 'wwwherd' and a password in the output. -7- (TODO This step may not be necessary any more) Edit the file test.working.json and next to "postgres_url" add this entry. -``` -"postgres_user_url": "postgres://wwwherd:<<>>@192.168.1.41:5432/wwwherd" -``` - where <<>> is the password output from the previous command. -8- In the original terminal, kill the build/ginfra_service.exe command. Restart as before: -``` -build/ginfra_service.exe $PWD -``` - -You can now start and stop the service without any additional steps. Note that the URL for connecting to the user account -database should be in a local file called 'postgres_user_url'. - -## Local - -``` -`ginfra local start postgres_url postgres://postgres:YjkyNDE4ZGJjMjQ1@deskgoat:5432` -``` +This has been deprecated. See the main documentation. 
diff --git a/service/bin/prime_db.sh b/service/bin/prime_db.sh new file mode 100755 index 0000000000000000000000000000000000000000..bb680bfc5f5faddb64098f971f9a3e3dbf320367 --- /dev/null +++ b/service/bin/prime_db.sh @@ -0,0 +1,13 @@ +#!/usr/bin/env bash + +rm -f postgres_user_url || true +rm -f ../postgres_user_url || true + +curl -X POST \ + -H "Content-Type: application/json" \ + -H "auth: AAAAAAAAAAAA" \ + -d '{ + "name": "wwwherd" + }' \ + http://localhost:8900/ginfra/datasource/init + diff --git a/service/bin/service_restart.cmd b/service/bin/service_restart.cmd deleted file mode 100755 index c3cee42ca3afa15d3f8c6fadbc2fcc3174827c09..0000000000000000000000000000000000000000 --- a/service/bin/service_restart.cmd +++ /dev/null @@ -1,5 +0,0 @@ - -taskkill /IM node /F - -timeout /t 1 -bin\service_start.cmd $1 diff --git a/service/bin/service_start.cmd b/service/bin/service_start.cmd deleted file mode 100755 index ef36300028d93d5cad018c96d1edc282921a0656..0000000000000000000000000000000000000000 --- a/service/bin/service_start.cmd +++ /dev/null @@ -1,5 +0,0 @@ - -REM This file will not be updated so any changes are safe. 
- -start /B node agent.ts ^1^> log/agent.log ^2^>^&^1 - diff --git a/service/local/values.go b/service/local/values.go index d203db582b2798a4e9b612ca4d7ad5792c587c5c..6cdbf36728b8611c4836fb2a2e89e25c50d2a6ed 100644 --- a/service/local/values.go +++ b/service/local/values.go @@ -34,7 +34,7 @@ var ( const ( ConfigPostgressAdminUrl = "postgres_url" ConfigPostgressUserUrl = "postgres_user_url" - ConfigManitudeAuth = "magnitude_auth" + ConfigRunnerAuth = "runner_auth" UserTokenLength = 12 diff --git a/service/package-lock.json b/service/package-lock.json index 9444e5ca4acf984ec78f4f7d4de3d9d5973116c8..ce94b899a8183f71815c820fc754650010cd7631 100644 --- a/service/package-lock.json +++ b/service/package-lock.json @@ -10,6 +10,7 @@ "dependencies": { "bcrypt": "^6.0.0", "cors": "^2.8.5", + "echarts": "^5.6.0", "express": "^5.1.0", "jsonwebtoken": "^9.0.2", "node-fetch": "^3.3.2", @@ -17,6 +18,7 @@ "pg": "^8.16.3", "pinia": "^3.0.3", "vue": "^3.5.17", + "vue-echarts": "^7.0.3", "vue-router": "4.5.1", "vuedraggable": "^4.1.0", "vuetify": "^3.9.0", @@ -752,6 +754,22 @@ "safe-buffer": "^5.0.1" } }, + "node_modules/echarts": { + "version": "5.6.0", + "resolved": "https://registry.npmjs.org/echarts/-/echarts-5.6.0.tgz", + "integrity": "sha512-oTbVTsXfKuEhxftHqL5xprgLoc0k7uScAwtryCgWF6hPYFLRwOUHiFmHGCBKP5NPFNkDVopOieyUqYGH8Fa3kA==", + "license": "Apache-2.0", + "dependencies": { + "tslib": "2.3.0", + "zrender": "5.6.1" + } + }, + "node_modules/echarts/node_modules/tslib": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.3.0.tgz", + "integrity": "sha512-N82ooyxVNm6h1riLCoyS9e3fuJ3AMG2zIZs2Gd1ATcSFjSA23Q0fzjjZeh0jbJvWVDZ0cJT8yaNNaaXHzueNjg==", + "license": "0BSD" + }, "node_modules/ee-first": { "version": "1.1.1", "resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz", @@ -2540,6 +2558,51 @@ } } }, + "node_modules/vue-demi": { + "version": "0.13.11", + "resolved": "https://registry.npmjs.org/vue-demi/-/vue-demi-0.13.11.tgz", + 
"integrity": "sha512-IR8HoEEGM65YY3ZJYAjMlKygDQn25D5ajNFNoKh9RSDMQtlzCxtfQjdQgv9jjK+m3377SsJXY8ysq8kLCZL25A==", + "hasInstallScript": true, + "license": "MIT", + "bin": { + "vue-demi-fix": "bin/vue-demi-fix.js", + "vue-demi-switch": "bin/vue-demi-switch.js" + }, + "engines": { + "node": ">=12" + }, + "funding": { + "url": "https://github.com/sponsors/antfu" + }, + "peerDependencies": { + "@vue/composition-api": "^1.0.0-rc.1", + "vue": "^3.0.0-0 || ^2.6.0" + }, + "peerDependenciesMeta": { + "@vue/composition-api": { + "optional": true + } + } + }, + "node_modules/vue-echarts": { + "version": "7.0.3", + "resolved": "https://registry.npmjs.org/vue-echarts/-/vue-echarts-7.0.3.tgz", + "integrity": "sha512-/jSxNwOsw5+dYAUcwSfkLwKPuzTQ0Cepz1LxCOpj2QcHrrmUa/Ql0eQqMmc1rTPQVrh2JQ29n2dhq75ZcHvRDw==", + "license": "MIT", + "dependencies": { + "vue-demi": "^0.13.11" + }, + "peerDependencies": { + "@vue/runtime-core": "^3.0.0", + "echarts": "^5.5.1", + "vue": "^2.7.0 || ^3.1.1" + }, + "peerDependenciesMeta": { + "@vue/runtime-core": { + "optional": true + } + } + }, "node_modules/vue-router": { "version": "4.5.1", "resolved": "https://registry.npmjs.org/vue-router/-/vue-router-4.5.1.tgz", @@ -2673,6 +2736,21 @@ "engines": { "node": ">=0.4" } + }, + "node_modules/zrender": { + "version": "5.6.1", + "resolved": "https://registry.npmjs.org/zrender/-/zrender-5.6.1.tgz", + "integrity": "sha512-OFXkDJKcrlx5su2XbzJvj/34Q3m6PvyCZkVPHGYpcCJ52ek4U/ymZyfuV1nKE23AyBJ51E/6Yr0mhZ7xGTO4ag==", + "license": "BSD-3-Clause", + "dependencies": { + "tslib": "2.3.0" + } + }, + "node_modules/zrender/node_modules/tslib": { + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.3.0.tgz", + "integrity": "sha512-N82ooyxVNm6h1riLCoyS9e3fuJ3AMG2zIZs2Gd1ATcSFjSA23Q0fzjjZeh0jbJvWVDZ0cJT8yaNNaaXHzueNjg==", + "license": "0BSD" } } } diff --git a/service/package.json b/service/package.json index 6fc41c833f7fe163c431e4c6ce52c4b7b79061d3..c6acc32622afb275cbee7b5ef8fd3fc3df84f02e 100644 --- 
a/service/package.json +++ b/service/package.json @@ -12,6 +12,7 @@ "dependencies": { "bcrypt": "^6.0.0", "cors": "^2.8.5", + "echarts": "^5.6.0", "express": "^5.1.0", "jsonwebtoken": "^9.0.2", "node-fetch": "^3.3.2", @@ -19,6 +20,7 @@ "pg": "^8.16.3", "pinia": "^3.0.3", "vue": "^3.5.17", + "vue-echarts": "^7.0.3", "vue-router": "4.5.1", "vuedraggable": "^4.1.0", "vuetify": "^3.9.0", diff --git a/service/providers/psql/schema/full.sql b/service/providers/psql/schema/full.sql index bc48fff2e55b75f7a297bc60241552c977e78003..7f8843b26c3df0afbe1b4b209c716c9804cf98b9 100644 --- a/service/providers/psql/schema/full.sql +++ b/service/providers/psql/schema/full.sql @@ -36,7 +36,7 @@ VALUES ('live'), CREATE TABLE accounts ( id SERIAL PRIMARY KEY, - username TEXT UNIQUE NOT NULL, + username TEXT NOT NULL, firstname TEXT, lastname TEXT, email TEXT, @@ -50,13 +50,27 @@ CREATE TABLE accounts REFERENCES account_roles (id), state_id INT NOT NULL, CONSTRAINT fkacc_state FOREIGN KEY (state_id) - REFERENCES account_state (id) + REFERENCES account_state (id), + CONSTRAINT fkacc_username_org UNIQUE (username, org_id) ); -- The password and hash must be replaced since it isn't hashed yet. 
INSERT INTO accounts (username, hpassword, org_id, roll_id, state_id) VALUES ('admin', 'aaaaaaaa', 1, 1, 1); +CREATE TABLE tokens +( + id SERIAL PRIMARY KEY, + name TEXT NOT NULL, + token TEXT UNIQUE NOT NULL, + create_date DATE NOT NULL DEFAULT CURRENT_DATE, + expire_date DATE NOT NULL, + org_id INT NOT NULL, + CONSTRAINT fktok_org FOREIGN KEY (org_id) + REFERENCES organizations (id), + CONSTRAINT fktok_unique_name_org UNIQUE (name, org_id) +); + CREATE TABLE provider_types ( id SERIAL PRIMARY KEY, @@ -161,6 +175,91 @@ CREATE TABLE case_tags REFERENCES tags (id) ); +CREATE TABLE case_history +( + id SERIAL PRIMARY KEY, + case_id INT NOT NULL, + case_text TEXT NOT NULL, + decorations TEXT, + retired BOOLEAN NOT NULL DEFAULT FALSE, + change_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, + org_id INT NOT NULL, + CONSTRAINT fkcase_hist_org FOREIGN KEY (org_id) + REFERENCES organizations (id) +); + +-- Index for better query performance when searching by case_id and date +CREATE INDEX idx_case_history_case_id_date ON case_history (case_id, change_date DESC); +-- Index for organization-based queries +CREATE INDEX idx_case_history_org_id ON case_history (org_id); + +CREATE TABLE case_bug_status +( + id SERIAL PRIMARY KEY, + name TEXT UNIQUE NOT NULL +); + +INSERT INTO case_bug_status (name) +VALUES ('new'), -- 1 + ('open'), -- 2 + ('fixed'), -- 3 + ('duplicate'), -- 4 + ('wont_fix'); -- 5 + +CREATE TABLE case_bug_types +( + id SERIAL PRIMARY KEY, + name TEXT UNIQUE NOT NULL +); + +INSERT INTO case_bug_types (name) +VALUES ('bug'), -- 1 + ('feature'), -- 2 + ('performance'), -- 3 + ('cost'), -- 4 + ('obsolete'); -- 5 + +CREATE TABLE case_bug_priority +( + id SERIAL PRIMARY KEY, + name TEXT UNIQUE NOT NULL +); + +INSERT INTO case_bug_priority (name) +VALUES ('critical'), -- 1 + ('high'), -- 2 + ('medium'), -- 3 + ('low'), -- 4 + ('informational'); -- 5 + +CREATE TABLE case_bugs +( + id SERIAL PRIMARY KEY, + case_id INT NOT NULL, + type INT NOT NULL, + status INT NOT 
NULL, + priority INT NOT NULL, + description TEXT, + info TEXT, + link TEXT, + reporter TEXT NOT NULL, + entry_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, + update_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, + org_id INT NOT NULL, + CONSTRAINT fkcb_case FOREIGN KEY (case_id) + REFERENCES cases (id), + CONSTRAINT fkcb_type FOREIGN KEY (type) + REFERENCES case_bug_types (id), + CONSTRAINT fkcb_status FOREIGN KEY (status) + REFERENCES case_bug_status (id), + CONSTRAINT fkcb_priority FOREIGN KEY (priority) + REFERENCES case_bug_priority (id), + CONSTRAINT fkcb_status_orgid FOREIGN KEY (org_id) + REFERENCES organizations (id), + CONSTRAINT fkcb_reporter FOREIGN KEY (reporter, org_id) + REFERENCES accounts (username, org_id) +); + -- FLOWS --------------------------------------------------------------------------- CREATE TABLE flows @@ -304,7 +403,6 @@ CREATE TABLE runs_history -- Executor status: internal target_node TEXT, - CONSTRAINT runhis_unique_name_per_org UNIQUE (name, org_id), CONSTRAINT fkrunhis_org_id FOREIGN KEY (org_id) REFERENCES organizations (id), CONSTRAINT fkrunhis_status FOREIGN KEY (run_status) @@ -358,6 +456,15 @@ CREATE TABLE case_stats time_ms INT NOT NULL, provider_id INT NOT NULL, + cost_ic FLOAT DEFAULT 0, -- inputCost + cost_oc FLOAT DEFAULT 0, -- outputCost + cost_it INT DEFAULT 0, -- inputTokens + cost_ot INT DEFAULT 0, -- outputTokens + cost_cwit INT DEFAULT 0, -- cacheWriteInputTokens + cost_crit INT DEFAULT 0, -- cacheReadInputTokens + + entry_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, + CONSTRAINT fkcase_stat_org FOREIGN KEY (org_id) REFERENCES organizations (id) ); @@ -374,6 +481,15 @@ CREATE TABLE flow_stats time_ms INT NOT NULL, provider_id INT NOT NULL, + cost_ic FLOAT DEFAULT 0, -- inputCost + cost_oc FLOAT DEFAULT 0, -- outputCost + cost_it INT DEFAULT 0, -- inputTokens + cost_ot INT DEFAULT 0, -- outputTokens + cost_cwit INT DEFAULT 0, -- cacheWriteInputTokens + cost_crit INT DEFAULT 0, -- 
cacheReadInputTokens + + entry_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, + CONSTRAINT fkflow_stat_org FOREIGN KEY (org_id) REFERENCES organizations (id) ); diff --git a/service/providers/psql/wwwherd_runs.go b/service/providers/psql/wwwherd_runs.go index dfd147299915e70c399a15654ade950280d39cd6..269ffbdce61bf697654a385bf0c8f740712d15da 100644 --- a/service/providers/psql/wwwherd_runs.go +++ b/service/providers/psql/wwwherd_runs.go @@ -284,7 +284,7 @@ func (p *providerDirect) GetProviderInfo(provID int) (*local.DbProvider, error) &provider.Name, &nullURL, &provider.Type, - &nullURL, + &nullModel, &provider.AuthKey, ) if err != nil { @@ -297,7 +297,7 @@ func (p *providerDirect) GetProviderInfo(provID int) (*local.DbProvider, error) provider.URL = "" } if nullModel.Valid { - provider.URL = nullModel.String + provider.Model = nullModel.String } else { provider.Model = "" } diff --git a/service/providers/psql/wwwherd_stats.go b/service/providers/psql/wwwherd_stats.go index 3ec65f375d0620a289f5299ca9e6d7d95712b1de..83585fdbd88f864f0083526f1848975f5a0964eb 100644 --- a/service/providers/psql/wwwherd_stats.go +++ b/service/providers/psql/wwwherd_stats.go @@ -44,8 +44,9 @@ func (p *providerDirect) PushStatsFlow(orgId int, data []interface{}) error { // Use json_populate_recordset for bulk insert query := ` - INSERT INTO flow_stats (org_id, flow_id, run_id, result, time_ms, provider_id) - SELECT batch.org_id, batch.flow_id, batch.run_id, batch.result, batch.time_ms, batch.provider_id + INSERT INTO flow_stats (org_id, flow_id, run_id, result, time_ms, provider_id, cost_ic, cost_oc, cost_it, cost_ot, cost_cwit, cost_crit) + SELECT batch.org_id, batch.flow_id, batch.run_id, batch.result, batch.time_ms, batch.provider_id, + batch.cost_ic, batch.cost_oc, batch.cost_it, batch.cost_ot, batch.cost_cwit, batch.cost_crit FROM json_populate_recordset(null::flow_stats, $1) AS batch ` @@ -88,11 +89,12 @@ func (p *providerDirect) PushStatsCase(orgId int, data []interface{}) 
error { // Use json_populate_recordset for bulk insert query := ` - INSERT INTO case_stats (org_id, case_id, flow_id, run_id, result, time_ms, provider_id) - SELECT batch.org_id, batch.case_id, batch.flow_id, batch.run_id, batch.result, batch.time_ms, batch.provider_id + INSERT INTO case_stats (org_id, case_id, flow_id, run_id, result, time_ms, provider_id, cost_ic, cost_oc, cost_it, cost_ot, cost_cwit, cost_crit) + SELECT batch.org_id, batch.case_id, batch.flow_id, batch.run_id, batch.result, batch.time_ms, batch.provider_id, + batch.cost_ic, batch.cost_oc, batch.cost_it, batch.cost_ot, batch.cost_cwit, batch.cost_crit FROM json_populate_recordset(null::case_stats, $1) AS batch ` - + _, err = cnx.Pool.Exec(context.Background(), query, string(jsonData)) if err != nil { return base.NewGinfraErrorChild("PushStatsCase failed to execute insert", err) diff --git a/service/public/help/changecontent.html b/service/public/help/changecontent.html new file mode 100644 index 0000000000000000000000000000000000000000..d67e48efc8fc56c700defc5b46a6426c2fd8cac4 --- /dev/null +++ b/service/public/help/changecontent.html @@ -0,0 +1,195 @@ + + + + + + Understanding Change Content + + + +
+<h1>Understanding Change Content</h1>
+
+<p>
+    Change Content displays the differences between two versions of text or code, showing what was removed, added,
+    or remained unchanged in a clear, color-coded format.
+</p>
+
+<h2>Color-Coded Display</h2>
+
+<ul>
+    <li><span style="background-color: #fddede;">Light Red (#fddede)</span>: Removed content</li>
+    <li><span style="background-color: #e9fbe9;">Light Green (#e9fbe9)</span>: Added content</li>
+    <li><span style="background-color: #f6f7fb;">Light Gray (#f6f7fb)</span>: Unchanged content</li>
+</ul>
+
+<h2>Symbol Indicators</h2>
+
+<ul>
+    <li><strong>--</strong> (double minus): Prefixes lines that have been removed</li>
+    <li><strong>++</strong> (double plus): Prefixes lines that have been added</li>
+    <li>No prefix: Lines that remain unchanged</li>
+</ul>
+
+<h2>Example Change Content</h2>
+
+<pre>
+function calculateTotal(items) {
+-- let total = 0;
+++ let total = 0.0;
+    for (let item of items) {
+-- total += item.price;
+++ total += item.price * item.quantity;
+    }
+    return total;
+}
+</pre>
+
+<h2>Key Features</h2>
+
+<ul>
+    <li><strong>Visual Clarity</strong>: Color coding makes it easy to identify different types of changes at a glance</li>
+    <li><strong>Symbol Prefixes</strong>: Clear indicators show the nature of each change</li>
+    <li><strong>Context Preservation</strong>: Unchanged lines provide context around modifications</li>
+    <li><strong>Line-by-line Comparison</strong>: Detailed view of exactly what changed</li>
+    <li><strong>Easy Review</strong>: Facilitates quick understanding of modifications made</li>
+</ul>
+
+<h2>How to Read Change Content</h2>
+
+<ol>
+    <li><strong>Look for colors</strong>: Red backgrounds indicate removals, green backgrounds indicate additions</li>
+    <li><strong>Check symbols</strong>: Lines with "--" were removed, lines with "++" were added</li>
+    <li><strong>Review context</strong>: Gray lines show unchanged content for reference</li>
+    <li><strong>Follow the flow</strong>: Read from top to bottom to understand the sequence of changes</li>
+</ol>
+
+<p>
+    <strong>Tip:</strong> Change content is commonly used in version control systems, code reviews, and document
+    comparison tools to help track modifications and collaborate effectively.
+</p>
+ + diff --git a/service/server/admin.js b/service/server/admin.js index 78c0b0abe22d77d1722af5c5169db70468432ec3..422384aa6ba0343c5e578da9b724b1108bfb7f63 100644 --- a/service/server/admin.js +++ b/service/server/admin.js @@ -9,7 +9,7 @@ import bcrypt from 'bcrypt'; import {getClient} from './client.js'; export async function getUsers(orgId) { - let client = getClient() + let client = await getClient() try { const result = await client.query('SELECT * FROM accounts WHERE org_id = $1', [orgId]); const accounts = result.rows; // Store result.rows in accounts variable @@ -48,7 +48,7 @@ export async function addUser(orgId, username, password, roll_id, state_id, firs const saltRounds = 10; const hpassword = await bcrypt.hash(password, saltRounds); - let client = getClient() + let client = await getClient() try { const existingUser = await queryAccountById(orgId, username); if (existingUser) { @@ -81,7 +81,7 @@ export async function changeUserState(orgId, username, newState) { // Update the user's state in the database try { - let client = getClient() + let client = await getClient() const query = 'UPDATE accounts SET state_id = $3 WHERE username = $2 AND org_id = $1 RETURNING *'; const values = [orgId, username, numericStateId]; @@ -99,7 +99,7 @@ export async function changeUserState(orgId, username, newState) { } async function queryAccountById(orgId, username) { - let client = getClient() + let client = await getClient() try { const result = await client.query({ text: 'SELECT * FROM accounts WHERE username = $1 AND org_id = $2', @@ -131,7 +131,7 @@ export async function changeUserPassword(orgId, username, oldPassword, newPasswo } try { - let client = getClient(); + let client = await getClient(); // First, get the current password hash const getUserQuery = 'SELECT hpassword FROM accounts WHERE username = $2 AND org_id = $1'; @@ -173,10 +173,148 @@ export async function changeUserPassword(orgId, username, oldPassword, newPasswo } +// -- TOKEN 
---------------------------------------------------------------------------------------------------------- + +export async function addToken(orgId, name, token, expireDate) { + if (!orgId) { + throw new Error('Organization ID cannot be null or empty'); + } + if (!name || name.trim() === '') { + throw new Error('Token name cannot be null or empty'); + } + if (!token || token.trim() === '') { + throw new Error('Token cannot be null or empty'); + } + if (!expireDate) { + throw new Error('Expire date cannot be null or empty'); + } + + let client = await getClient(); + try { + // Check if token name already exists for this org + const existingToken = await client.query( + 'SELECT id FROM tokens WHERE name = $1 AND org_id = $2', + [name, orgId] + ); + + if (existingToken.rowCount > 0) { + throw new Error('Token name already exists for this organization'); + } + + const result = await client.query( + 'INSERT INTO tokens (name, token, expire_date, org_id) VALUES ($1, $2, $3, $4) RETURNING *', + [name, token, expireDate, orgId] + ); + + logger.info(`Token '${name}' added successfully for organization ${orgId}`); + return result.rows[0]; + } catch (error) { + logger.error('Error adding token:', error); + throw error; + } +} + +export async function getTokensForOrg(orgId) { + if (!orgId) { + throw new Error('Organization ID cannot be null or empty'); + } + + let client = await getClient(); + try { + const result = await client.query( + 'SELECT id, name, create_date, expire_date FROM tokens WHERE org_id = $1 ORDER BY create_date DESC', + [orgId] + ); + + return result.rows; + } catch (error) { + logger.error('Error getting tokens for organization:', error); + throw error; + } +} + +export async function getTokenInfo(orgId, token) { + if (!token || token.trim() === '') { + throw new Error('Token cannot be null or empty'); + } + + let client = await getClient(); + try { + const result = await client.query( + 'SELECT name, expire_date FROM tokens WHERE token = $1 AND org_id = 
$2', + [token, orgId] + ); + + if (result.rowCount === 0) { + return null; + } + + return result.rows[0]; + } catch (error) { + logger.error('Error getting token info:', error); + throw error; + } +} + +export async function getTokenInfoNoOrg(token) { + if (!token || token.trim() === '') { + throw new Error('Token cannot be null or empty'); + } + + let client = await getClient(); + try { + const result = await client.query( + 'SELECT name, org_id, expire_date FROM tokens WHERE token = $1', + [token] + ); + + if (result.rowCount === 0) { + return null; + } + + return result.rows[0]; + } catch (error) { + logger.error('Error getting token info:', error); + throw error; + } +} + +export async function deleteToken(tokenId, orgId) { + if (!tokenId) { + throw new Error('Token ID cannot be null or empty'); + } + if (!orgId) { + throw new Error('Organization ID cannot be null or empty'); + } + + let client = await getClient(); + try { + const result = await client.query( + 'DELETE FROM tokens WHERE id = $1 AND org_id = $2 RETURNING name', + [tokenId, orgId] + ); + + if (result.rowCount === 0) { + throw new Error('Token not found or not authorized to delete'); + } + + logger.info(`Token '${result.rows[0].name}' deleted successfully for organization ${orgId}`); + return { success: true, deletedToken: result.rows[0].name }; + } catch (error) { + logger.error('Error deleting token:', error); + throw error; + } +} + export default { getUsers, addUser, changeUserState, changeUserPassword, - queryAccountById + queryAccountById, + addToken, + getTokensForOrg, + getTokenInfo, + deleteToken, + getTokenInfoNoOrg } \ No newline at end of file diff --git a/service/server/admin_rest.js index e2d1bb786da72e70b7e523f4a2bc3004014fb309..8493a20ae8ab01b5917b912af8896c16662a4e8d 100644 --- a/service/server/admin_rest.js +++ b/service/server/admin_rest.js @@ -101,4 +101,83 @@ export async function addAdmin(app) { } }); + // -- Token management endpoints
------------------------------------------------------------------------------ + + app.post('/api/tokens', authenticateToken, async (req, res) => { + if (!isAdminRole(req.user.username)) { + return res.sendStatus(403); + } + try { + const { name, token, expireDate } = req.body; + + if (!name || !token || !expireDate) { + return res.status(400).json({ + error: 'Name, token, and expire date are required' + }); + } + + const result = await proxy.addToken(1, name, token, expireDate); + res.json(result); + } catch (error) { + if (error.message.includes('already exists')) { + return res.status(409).json({ error: error.message }); + } + res.status(500).json({ error: error.message }); + } + }); + + app.get('/api/tokens', authenticateToken, async (req, res) => { + if (!isAdminRole(req.user.username)) { + return res.sendStatus(403); + } + try { + const tokens = await proxy.getTokensForOrg(1); + res.json(tokens); + } catch (error) { + res.status(500).json({ error: error.message }); + } + }); + + app.get('/api/tokens/:token/info', authenticateToken, async (req, res) => { + if (!isAdminRole(req.user.username)) { + return res.sendStatus(403); + } + try { + const { token } = req.params; + const tokenInfo = await proxy.getTokenInfo(1, token); + + if (!tokenInfo) { + return res.status(404).json({ error: 'Token not found' }); + } + + res.json(tokenInfo); + } catch (error) { + res.status(500).json({ error: error.message }); + } + }); + + app.delete('/api/tokens/:tokenId', authenticateToken, async (req, res) => { + if (!isAdminRole(req.user.username)) { + return res.sendStatus(403); + } + try { + const tokenId = parseInt(req.params.tokenId); + + if (isNaN(tokenId)) { + return res.status(400).json({ + error: 'Invalid token ID: must be a valid number' + }); + } + + // Note: deleteToken takes (tokenId, orgId) in that order. + const result = await proxy.deleteToken(tokenId, 1); + res.json(result); + } catch (error) { + if (error.message.includes('not found')) { + return res.status(404).json({ error: 'Token not found' }); + }
res.status(500).json({ error: error.message }); + } + }); + + } diff --git a/service/server/cattags.js b/service/server/cattags.js index 19b46fa7328dc903aba5a3665f1f113ccb3ab534..98ba17def666b897d5cde42128439bf3906f7375 100644 --- a/service/server/cattags.js +++ b/service/server/cattags.js @@ -10,7 +10,7 @@ import { getClient } from './client.js'; // - CATEGORIES AND TAGS async function queryAllCategoriesByOrgId(orgId) { - let client = getClient() + let client = await getClient() try { const result = await client.query('SELECT * FROM categories WHERE org_id = $1', [orgId]); return result.rows; @@ -21,7 +21,7 @@ async function queryAllCategoriesByOrgId(orgId) { } async function queryAllTagsByOrgId(orgId) { - let client = getClient() + let client = await getClient() try { const result = await client.query('SELECT * FROM tags WHERE org_id = $1', [orgId]); return result.rows; @@ -32,7 +32,7 @@ async function queryAllTagsByOrgId(orgId) { } async function addTag(orgId, name, color = null) { - let client = getClient() + let client = await getClient() try { const result = await client.query( 'INSERT INTO tags (org_id, name, color) VALUES ($1, $2, $3) RETURNING *', @@ -46,7 +46,7 @@ async function addTag(orgId, name, color = null) { } async function removeTag(orgId, tagId) { - let client = getClient() + let client = await getClient() try { await client.query('BEGIN'); @@ -69,7 +69,7 @@ async function removeTag(orgId, tagId) { } async function updateTag(orgId, tagId, name, color = null) { - let client = getClient() + let client = await getClient() try { const result = await client.query( 'UPDATE tags SET name = $1, color = $2 WHERE id = $3 AND org_id = $4 RETURNING *', @@ -83,7 +83,7 @@ async function updateTag(orgId, tagId, name, color = null) { } async function addCategory(orgId, name, color, description = null) { - let client = getClient() + let client = await getClient() try { const result = await client.query( 'INSERT INTO categories (org_id, name, color, description) 
VALUES ($1, $2, $3, $4) RETURNING *', @@ -97,7 +97,7 @@ async function addCategory(orgId, name, color, description = null) { } async function updateCategory(orgId, categoryId, name, color, description = null) { - let client = getClient() + let client = await getClient() try { const result = await client.query( 'UPDATE categories SET name = $1, color = $2, description = $3 WHERE id = $4 AND org_id = $5 RETURNING *', diff --git a/service/server/client.js b/service/server/client.js index 8b716b6200c05cb2f72b5625b90f7a2eb3e0a9e6..c677ee063950a9f2dba0c5afd18cd9b7570b1325 100644 --- a/service/server/client.js +++ b/service/server/client.js @@ -54,10 +54,24 @@ async function closeConnection() { } } -export function getClient() { - if (!client) { - client = new Client(getPostgresUserUrl()); - client.connect(); +export async function getClient(pass = "") { + let url = getPostgresUserUrl() + if (pass) { + const match = url.match(/^postgres:\/\/([^:]+):([^@]+)@(.+)$/); + if (match) { + const [, username, password, hostAndDb] = match; + url = `postgres://postgres:${pass}@${hostAndDb}`; + } + let rootclient = new Client(url); + await rootclient.connect(); + return rootclient + + } else { + if (!client) { + // TODO change this to a pool + client = new Client(url); + await client.connect(); + } } return client; } diff --git a/service/server/common.js b/service/server/common.js index 0d9c8d1ebd8a2b2eb0133a7e9b58d4e31c88fd17..fa4c5c1eff8dc406b089fb2fd8e37237f2dbc6ee 100644 --- a/service/server/common.js +++ b/service/server/common.js @@ -5,6 +5,7 @@ import {activeAccounts, logger} from './store.js'; import jwt from 'jsonwebtoken'; +import proxy from "./proxy.js"; // Obviously, it is incredibly unsafe to not set in the environment, but for testing and demo it should be ok. 
export const JWT_SECRET = process.env.WWWHERD_JWT_SECRET || "7ZjeJaw72bM2akfbGUfevK2Y5I1jFMQdvY86tzmGPCRdfq6oMVdKzCr0QCxUr9t"; @@ -38,5 +39,116 @@ export const authenticateToken = (req, res, next) => { }); }; +// Token cache to store org_id numbers mapped by tokens +const tokenCache = new Map(); + +// JWT or rest token. +export const authenticateTokenOrToken = (req, res, next) => { + const authHeader = req.headers['authorization']; + const token = authHeader && authHeader.split(' ')[1]; + + if (token) { + jwt.verify(token, JWT_SECRET, (err, user) => { + if (err) { + logger.error('Bad login:', err); + return res.sendStatus(403); + } + req.user = user; + req.org_id = 0; + if (!activeAccounts.has(req.user.username)) { + logger.error('Bad user in login:', err); + return res.sendStatus(401); + } + next(); + }); + return + } + + // See if a token needs to be cached. + try { + const authtoken = req.headers['auth-token']; + + if (tokenCache.has(authtoken)) { + if (isTokenExpired(authtoken)) { + return res.sendStatus(406); + } else { + const ctoken = tokenCache.get(authtoken); + req.user = "" + req.org_id = ctoken.org_id; + next(); + return; + } + } + + const tokenInfo = proxy.getTokenInfoNoOrg(authtoken); + tokenInfo.then(data => { + if (!data) { + return res.sendStatus(403); + } + + setTokenCache(authtoken, data.org_id, data.expire_date); + if (isTokenExpired(authtoken)) { + return res.sendStatus(406); + } + req.user = "" + req.org_id = data.org_id; + next(); + + }).catch(error => { + logger.error('Error getting token info:', error); + return res.sendStatus(403); + }); + + } catch (error) { + logger.error('Error in token authentication:', error); + return res.sendStatus(500); + } +}; + + +// Helper functions for token cache management +export function setTokenCache(token, org_id, expiration_date) { + tokenCache.set(token, { + org_id: org_id, + expiration_date: expiration_date + }); +} + +export function getTokenCache(token) { + return tokenCache.get(token); +} + +export 
function hasTokenCache(token) { + return tokenCache.has(token); +} + +export function deleteTokenCache(token) { + return tokenCache.delete(token); +} + +export function clearTokenCache() { + tokenCache.clear(); +} + +export function isTokenExpired(token) { + const cacheEntry = tokenCache.get(token); + if (!cacheEntry) return true; + + const now = new Date(); + const expiration = new Date(cacheEntry.expiration_date); + return now > expiration; +} + +// TODO : clean based on LRU, since we want some expired tokens in the cache to resist any hits. +export function cleanupExpiredTokens() { + const now = new Date(); + for (const [token, data] of tokenCache.entries()) { + const expiration = new Date(data.expiration_date); + if (now > expiration) { + tokenCache.delete(token); + } + } +} + diff --git a/service/server/flowcase.js b/service/server/flowcase.js index babfcb3dbd4be7d3e390293d8d2bbcd97d893b12..f54350e0183c288d320940a37c3afbaacf1dc725 100644 --- a/service/server/flowcase.js +++ b/service/server/flowcase.js @@ -6,11 +6,14 @@ import {logger} from './store.js'; import { getClient } from './client.js'; +// --------------------------------------------------------------------------------------------------------------------- +// - CASE + async function queryCasesByOrgId(orgId) { - let client = getClient() + let client = await getClient() try { let query = ` - SELECT cc.*, + SELECT cc.id, cc.name, cc.text, cc.decorations, cc.cat_id, cc.retired, COALESCE(array_agg(ct.tag_id) FILTER (WHERE ct.tag_id IS NOT NULL), '{}') as tag_ids FROM cases cc LEFT JOIN case_tags ct ON cc.id = ct.case_id @@ -27,9 +30,9 @@ async function queryCasesByOrgId(orgId) { } async function queryCaseById(orgId, caseId) { - let client = getClient() + let client = await getClient() try { - const result = await client.query('SELECT * FROM cases WHERE id = $1 AND org_id = $2', [caseId, orgId]); + const result = await client.query('SELECT id, name, text, decorations, cat_id, retired FROM cases WHERE id = 
$1 AND org_id = $2', [caseId, orgId]); return result.rows[0]; } catch (error) { logger.error('Error querying test case by id:', error); @@ -38,12 +41,12 @@ async function addCase(orgId, catId, typeId, name, text, decorations, tags = []) { - let client = getClient() + let client = await getClient() try { await client.query('BEGIN'); - const result = await client.query( - 'INSERT INTO cases (org_id, cat_id, type_id, name, text, decorations) VALUES ($1, $2, $3, $4, $5, $6) RETURNING *', + const result = await client.query( + `INSERT INTO cases (org_id, cat_id, type_id, name, text, decorations) VALUES ($1, $2, $3, $4, $5, $6) + RETURNING id, cat_id, type_id, name, text, decorations, retired`, [orgId, catId, typeId, name, text, decorations] ); const caseId = result.rows[0].id; @@ -54,8 +57,11 @@ async function addCase(orgId, catId, typeId, name, text, decorations, tags = []) [caseId, tags] ); } - await client.query('COMMIT'); + + const retired = result.rows[0].retired; + await addCaseHistory(orgId, caseId, text, decorations, retired) + await client.query('COMMIT'); + return result.rows[0]; } catch (error) { await client.query('ROLLBACK'); @@ -65,14 +71,28 @@ async function updateCase(orgId, catId, caseId, typeId, name, text, decorations, tags = []) { - let client = getClient() + let client = await getClient() + + let og_text + let og_decorations + + try { + const result = await client.query('SELECT text as og_text, decorations as og_decorations FROM cases WHERE id = $1 AND org_id = $2', [caseId, orgId]); + ({ og_text, og_decorations } = result.rows[0]); + } catch (error) { + logger.error('Cannot complete update. 
Error querying test case by id:', error); + throw error; + } + try { await client.query('BEGIN'); const result = await client.query( - 'UPDATE cases SET org_id = $1, cat_id = $2, name = $3, text = $4, type_id = $5, decorations = $6 WHERE id = $7 RETURNING *', + `UPDATE cases SET cat_id = $2, name = $3, text = $4, type_id = $5, decorations = $6 WHERE id = $7 AND org_id = $1 + RETURNING id, cat_id, name, text, type_id, decorations, retired`, [orgId, catId, name, text, typeId, decorations, caseId ] ); + const retired = result.rows[0].retired; await client.query('DELETE FROM case_tags WHERE case_id = $1', [caseId]); if (tags.length > 0) { @@ -82,7 +102,10 @@ async function updateCase(orgId, catId, caseId, typeId, name, text, decorations, ); } + await addCaseHistory(orgId, caseId, name, text, og_text, decorations, og_decorations, retired) + await client.query('COMMIT'); + return result.rows[0]; } catch (error) { await client.query('ROLLBACK'); @@ -92,7 +115,7 @@ async function updateCase(orgId, catId, caseId, typeId, name, text, decorations, } async function deleteCase(orgId, caseId) { - let client = getClient() + let client = await getClient() try { await client.query('BEGIN'); @@ -101,7 +124,7 @@ async function deleteCase(orgId, caseId) { // Then remove the case itself const result = await client.query( - 'DELETE FROM cases WHERE id = $1 AND org_id = $2 RETURNING *', + 'DELETE FROM cases WHERE id = $1 AND org_id = $2 RETURNING id, cat_id, name, text, type_id, decorations, retired', [caseId, orgId] ); @@ -115,10 +138,10 @@ async function deleteCase(orgId, caseId) { } async function retireCase(orgId, caseId) { - let client = getClient() + let client = await getClient() try { const result = await client.query( - 'UPDATE cases SET retired = true WHERE id = $1 AND org_id = $2 RETURNING *', + 'UPDATE cases SET retired = true WHERE id = $1 AND org_id = $2 RETURNING id, cat_id, name, text, type_id, decorations, retired', [caseId, orgId] ) return result.rows[0]; @@ -128,12 
+151,281 @@ async function retireCase(orgId, caseId) { } } +// --------------------------------------------------------------------------------------------------------------------- +// - CASE BUG + +async function addCaseBug(orgId, caseId, status, description, info, type, priority, reporter, link) { + let client = await getClient(); + try { + const result = await client.query( + `INSERT INTO case_bugs ( + org_id, case_id, status, description, info, type, priority, reporter, link, entry_date, update_date + ) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, CURRENT_TIMESTAMP, CURRENT_TIMESTAMP) + RETURNING id, case_id, status, description, info, type, priority, reporter, link, entry_date, update_date`, + [orgId, caseId, status, description, info, type, priority, reporter, link] + ); + logger.info(`Added case bug with id: ${result.rows[0].id}`); + return result.rows[0]; + } catch (error) { + logger.error('Error adding case bug:', error); + throw error; + } +} + + +async function updateCaseBug(orgId, id, status, description, info, type, priority, link) { + let client = await getClient(); + try { + const result = await client.query( + `UPDATE case_bugs SET org_id = $1, status = $2, description = $3, info = $4, type = $5, priority = $6, link = $7, update_date = CURRENT_TIMESTAMP + WHERE id = $8 AND org_id = $9 + RETURNING id, case_id, status, description, info, type, priority, reporter, link, entry_date, update_date`, + [orgId, status, description, info, type, priority, link, id, orgId] + ); + logger.info(`Added case bug with id: ${result.rows[0].id}`); + return result.rows[0]; + } catch (error) { + logger.error('Error adding case bug:', error); + throw error; + } +} + +async function queryCaseBugById(orgId, id) { + let client = await getClient(); + try { + const result = await client.query('SELECT id, case_id, status, description, info, type, priority, reporter, link, entry_date, update_date FROM case_bugs WHERE id = $1 AND org_id = $2', [id, orgId]); + return 
result.rows[0]; + } catch (error) { + logger.error('Error querying case bug by id:', error); + throw error; + } +} + +async function queryCaseBugByCaseId(orgId, caseId) { + let client = await getClient(); + try { + const result = await client.query('SELECT id, case_id, status, description, info, type, priority, reporter, link, entry_date, update_date FROM case_bugs WHERE case_id = $1 AND org_id = $2', [caseId, orgId]); + return result.rows; + } catch (error) { + logger.error('Error querying case bug by id:', error); + throw error; + } +} + +async function deleteCaseBug(orgId, id) { + let client = await getClient(); + try { + const result = await client.query('DELETE FROM case_bugs WHERE id = $1 AND org_id = $2 RETURNING id', [id, orgId]); + + if (result.rows.length === 0) { + throw new Error(`Case bug with id ${id} not found`); + } + logger.info(`Removed case bug with id: ${id}`); + return result.rows[0]; + } catch (error) { + logger.error('Error removing case bug:', error); + throw error; + } +} + +// --------------------------------------------------------------------------------------------------------------------- +// - CASE HISTORY + +function generateDiff(oldText, newText) { + const oldLines = oldText.split('\n'); + const newLines = newText.split('\n'); + + const operations = []; + let oldIndex = 0; + let newIndex = 0; + + while (oldIndex < oldLines.length || newIndex < newLines.length) { + // If we've reached the end of old lines, remaining are insertions + if (oldIndex >= oldLines.length) { + const insertCount = newLines.length - newIndex; + operations.push({ + type: 'insert', + oldStart: oldLines.length, + newStart: newIndex + 1, + newCount: insertCount, + lines: newLines.slice(newIndex) + }); + break; + } + + // If we've reached the end of new lines, remaining are deletions + if (newIndex >= newLines.length) { + const deleteCount = oldLines.length - oldIndex; + operations.push({ + type: 'delete', + oldStart: oldIndex + 1, + oldCount: deleteCount, + lines: 
oldLines.slice(oldIndex) + }); + break; + } + + // If lines are the same, retain them + if (oldLines[oldIndex] === newLines[newIndex]) { + const retainStart = oldIndex; + + // Count consecutive matching lines + while (oldIndex < oldLines.length && + newIndex < newLines.length && + oldLines[oldIndex] === newLines[newIndex]) { + oldIndex++; + newIndex++; + } + + operations.push({ + type: 'retain', + oldStart: retainStart + 1, + count: oldIndex - retainStart, + lines: oldLines.slice(retainStart, oldIndex) + }); + continue; + } + + // Lines are different - find the extent of the change + const changeStartOld = oldIndex; + const changeStartNew = newIndex; + + // Look ahead to find where lines sync up again + let syncFound = false; + let oldEnd = oldIndex; + let newEnd = newIndex; + + // Simple approach: look for the next matching line + for (let i = oldIndex + 1; i < oldLines.length && !syncFound; i++) { + for (let j = newIndex + 1; j < newLines.length; j++) { + if (oldLines[i] === newLines[j]) { + oldEnd = i; + newEnd = j; + syncFound = true; + break; + } + } + } + + if (!syncFound) { + // No sync point found, treat rest as change + oldEnd = oldLines.length; + newEnd = newLines.length; + } + + // Extract changed sections + const deletedLines = oldLines.slice(oldIndex, oldEnd); + const insertedLines = newLines.slice(newIndex, newEnd); + + // Add operations + if (deletedLines.length > 0) { + operations.push({ + type: 'delete', + oldStart: oldIndex + 1, + oldCount: deletedLines.length, + lines: deletedLines + }); + } + + if (insertedLines.length > 0) { + operations.push({ + type: 'insert', + oldStart: oldIndex, + newStart: newIndex + 1, + newCount: insertedLines.length, + lines: insertedLines + }); + } + + oldIndex = oldEnd; + newIndex = newEnd; + } + + return formatDiff(operations); +} + +function formatDiff(operations) { + if (operations.length === 0) { + return ''; + } + + const result = []; + + for (const op of operations) { + if (op.type === 'retain') { + for (const 
line of op.lines) {
+                result.push(`|^^${line}`);
+            }
+        } else if (op.type === 'delete') {
+            //const range = op.oldCount === 1
+            //    ? `${op.oldStart}d${op.oldStart - 1}`
+            //    : `${op.oldStart},${op.oldStart + op.oldCount - 1}d${op.oldStart - 1}`;
+            //result.push(range);
+
+            for (const line of op.lines) {
+                result.push(`|<<${line}`);
+            }
+        } else if (op.type === 'insert') {
+            //const range = op.newCount === 1
+            //    ? `${op.oldStart}a${op.newStart}`
+            //    : `${op.oldStart}a${op.newStart},${op.newStart + op.newCount - 1}`;
+            //result.push(range);
+
+            for (const line of op.lines) {
+                result.push(`|>>${line}`);
+            }
+        }
+    }
+
+    return result.join('\n');
+}
+
+async function addCaseHistory(orgId, caseId, name, text, og_text, decorations, og_decorations, retired) {
+
+    const textDiff = generateDiff(og_text || '', text || '');
+
+    // Generate diff for decorations, which are stored as a whitespace-separated token string
+    // (reflow to one token per line so they can be diffed line-wise).
+    const ogDecorationsFormatted = (og_decorations || '').split(' ').filter(token => token.trim()).join('\n');
+    const decorationsFormatted = (decorations || '').split(' ').filter(token => token.trim()).join('\n');
+    const decorationsDiff = generateDiff(ogDecorationsFormatted, decorationsFormatted);
+
+    let client = await getClient()
+    try {
+        const result = await client.query(
+            `INSERT INTO case_history (case_id, case_text, decorations, retired, org_id, change_date)
+             VALUES ($1, $2, $3, $4, $5, CURRENT_TIMESTAMP)
+             RETURNING id, case_id, case_text, decorations, retired, change_date`,
+            [caseId, textDiff, decorationsDiff, retired, orgId]
+        );
+        return result.rows[0];
+    } catch (error) {
+        logger.error('Error adding case history:', error);
+        throw error;
+    }
+}
+
+async function queryCaseHistoryByCaseId(orgId, caseId) {
+    let client = await getClient()
+    try {
+        const result = await client.query(
+            'SELECT id, case_id, case_text, decorations, retired, change_date FROM case_history WHERE case_id = $1 AND org_id = $2 ORDER BY change_date DESC',
+            [caseId, orgId]
+        );
+        return result.rows;
+ } catch (error) { + logger.error('Error querying case history by case id:', error); + throw error; + } +} + +// --------------------------------------------------------------------------------------------------------------------- +// - FLOW async function queryAllFlowsByOrgId(orgId, categoryId) { - let client = getClient() + let client = await getClient() try { let query = ` - SELECT cc.*, + SELECT cc.id, cc.name, cc.narrative, cc.decorations, cc.cat_id, cc.retired, cc.cids, COALESCE(array_agg(ct.tag_id) FILTER (WHERE ct.tag_id IS NOT NULL), '{}') as tag_ids FROM flows cc LEFT JOIN flow_tags ct ON cc.id = ct.flow_id @@ -157,11 +449,10 @@ async function queryAllFlowsByOrgId(orgId, categoryId) { } } - async function queryFlowById(orgId, flowId) { - let client = getClient() + let client = await getClient() try { - const result = await client.query('SELECT * FROM flows WHERE id = $1 AND org_id = $2', [flowId, orgId]); + const result = await client.query('SELECT id, name, narrative, decorations, cat_id, cids, retired FROM flows WHERE id = $1 AND org_id = $2', [flowId, orgId]); return result.rows[0]; } catch (error) { logger.error('Error querying flow by id:', error); @@ -170,10 +461,10 @@ async function queryFlowById(orgId, flowId) { } async function addFlow(orgId, catId, name, narrative, decorations, cids = [], tag_ids = []) { - let client = getClient() + let client = await getClient() try { const result = await client.query( - 'INSERT INTO flows (org_id, cat_id, name, narrative, decorations, cids) VALUES ($1, $2, $3, $4, $5, $6) RETURNING *', + 'INSERT INTO flows (org_id, cat_id, name, narrative, decorations, cids) VALUES ($1, $2, $3, $4, $5, $6) RETURNING id, name, narrative, decorations, cat_id, cids, retired', [orgId, catId, name, narrative, decorations, cids] ); const flowId = result.rows[0].id; @@ -193,12 +484,13 @@ async function addFlow(orgId, catId, name, narrative, decorations, cids = [], ta } async function updateFlow(orgId, catId, flowId, name, narrative, 
decorations, cids = [], tag_ids = []) {
-    let client = getClient()
+    let client = await getClient()
     try {
         await client.query('BEGIN');
         const result = await client.query(
-            'UPDATE flows SET org_id = $1, cat_id = $2, name = $3, narrative = $4, cids = $5, decorations = $6 WHERE id = $7 RETURNING *',
+            'UPDATE flows SET cat_id = $2, name = $3, narrative = $4, cids = $5, decorations = $6 WHERE id = $7 AND org_id = $1 ' +
+            'RETURNING id, name, narrative, decorations, cat_id, cids, retired',
             [orgId, catId, name, narrative, cids, decorations, flowId]
         );
@@ -220,7 +512,7 @@ async function updateFlow(orgId, catId, flowId, name, narrative, decorations, ci
 }
 
 async function deleteFlow(orgId, flowId) {
-    let client = getClient()
+    let client = await getClient()
     try {
         await client.query('BEGIN');
@@ -230,7 +522,7 @@ async function deleteFlow(orgId, flowId) {
 
         // Then remove the flow itself
         const result = await client.query(
-            'DELETE FROM flows WHERE id = $1 AND org_id = $2 RETURNING *',
+            'DELETE FROM flows WHERE id = $1 AND org_id = $2 RETURNING id, name, narrative, decorations, cat_id, cids, retired',
             [flowId, orgId]
         );
@@ -271,10 +563,10 @@ async function getFlowCaseNamesForIds(orgId, caseId, flowId) {
 }
 
 async function retireFlow(orgId, flowId) {
-    let client = getClient()
+    let client = await getClient()
     try {
         const result = await client.query(
-            'UPDATE flows SET retired = true WHERE id = $1 AND org_id = $2 RETURNING *',
+            'UPDATE flows SET retired = true WHERE id = $1 AND org_id = $2 RETURNING id, name, narrative, decorations, cat_id, cids, retired',
             [flowId, orgId]
         )
         return result.rows[0];
@@ -291,6 +583,13 @@ export default {
     updateCase,
     deleteCase,
     retireCase,
+    addCaseBug,
+    updateCaseBug,
+    deleteCaseBug,
+    queryCaseBugById,
+    queryCaseBugByCaseId,
+    addCaseHistory,
+    queryCaseHistoryByCaseId,
     queryAllFlowsByOrgId,
     queryFlowById,
     addFlow,
diff --git a/service/server/flowcase_rest.js b/service/server/flowcase_rest.js
index
a979c63ef1fe2119ae00d76157ae51b630af0831..7deaccb892a086d8319f26bba97f048db6d944ed 100644 --- a/service/server/flowcase_rest.js +++ b/service/server/flowcase_rest.js @@ -149,4 +149,91 @@ export async function addFlowCase(app) { } }); + // -- Case Bugs --------------------------------------------------------------------------------------------------- + + app.post('/api/casesbug', authenticateToken, async (req, res) => { + try { + const result = await proxy.addCaseBug(1, parseInt(req.body.case_id), parseInt(req.body.status), req.body.description, req.body.info, + parseInt(req.body.type), parseInt(req.body.priority), req.user.username, req.body.link); + res.json(result); + } catch (error) { + res.status(500).json({error: error.message}); + } + }); + + app.put('/api/casesbug', authenticateToken, async (req, res) => { + try { + const result = await proxy.updateCaseBug(1, parseInt(req.body.id), parseInt(req.body.status), req.body.description, + req.body.info, parseInt(req.body.type), parseInt(req.body.priority), req.body.link); + if (!result) { + return res.status(404).json({error: 'Case bug not found'}); + } + res.json(result); + } catch (error) { + res.status(500).json({error: error.message}); + } + }); + + app.get('/api/casesbug/id/:id', authenticateToken, async (req, res) => { + try { + const bugId = parseInt(req.params.id); + const bug = await proxy.queryCaseBugById(1, bugId); + if (!bug) { + return res.status(404).json({error: 'Case bug not found'}); + } + res.json(bug); + } catch (error) { + res.status(500).json({error: error.message}); + } + }); + + app.get('/api/casesbug/case/:id', authenticateToken, async (req, res) => { + try { + const caseId = parseInt(req.params.id); + const bugs = await proxy.queryCaseBugByCaseId(1, caseId); + res.json(bugs); + } catch (error) { + res.status(500).json({error: error.message}); + } + }); + + app.delete('/api/casesbug/id/:id', authenticateToken, async (req, res) => { + try { + const bugId = parseInt(req.params.id); + const 
result = await proxy.deleteCaseBug(1, bugId);
+            if (!result) {
+                return res.status(404).json({error: 'Case bug not found'});
+            }
+            res.json(result);
+        } catch (error) {
+            res.status(500).json({error: error.message});
+        }
+    });
+
+    // -- Case History ------------------------------------------------------------------------------------------------
+
+    app.post('/api/cases/history/:caseId', authenticateToken, async (req, res) => {
+        try {
+            const caseId = parseInt(req.params.caseId);
+            // Args: orgId, caseId, name, text, og_text, decorations, og_decorations, retired
+            const result = await proxy.addCaseHistory(1, caseId, req.body.name, req.body.text, req.body.og_text,
+                req.body.decorations, req.body.og_decorations, req.body.retired);
+            res.json(result);
+        } catch (error) {
+            res.status(500).json({error: error.message});
+        }
+    });
+
+    app.get('/api/cases/history/:caseId', authenticateToken, async (req, res) => {
+        try {
+            const caseId = parseInt(req.params.caseId);
+            const history = await proxy.queryCaseHistoryByCaseId(1, caseId);
+            res.json(history);
+        } catch (error) {
+            if (error.message.toLowerCase().includes('not found')) {
+                res.status(404).json({error: error.message});
+            } else {
+                res.status(500).json({error: error.message});
+            }
+        }
+    });
}
\ No newline at end of file
diff --git a/service/server/impex.js b/service/server/impex.js
index 5d3317b96030a906dfb81a2191a4d825081d9ecf..993230ebcbe089c1f02784e4237f0a3fded5a6a7 100644
--- a/service/server/impex.js
+++ b/service/server/impex.js
@@ -11,7 +11,7 @@ import proxy from './proxy.js';
 
 // This is the user export. It does not include runs.
 async function exportDatabaseByOrgId(orgId) {
-    let client = getClient()
+    let client = await getClient()
     try {
         logger.info(`Starting database export for org_id: ${orgId}`);
@@ -99,7 +99,7 @@ async function exportDatabaseByOrgId(orgId) {
 }
 
 async function importDatabaseFromJson(orgId, importData) {
-    let client = getClient()
+    let client = await getClient()
     try {
         await client.query('BEGIN');
@@ -362,7 +362,102 @@ async function importDatabaseFromJson(orgId, importData) {
     }
 }
 
+// Tables to preserve (not cleaned). Listed for reference only; wipeDatabase() below deletes from the remaining
+// tables explicitly.
+const PROTECTED_TABLES = [
+    'organizations',
+    'account_roles',
+    'account_state',
+    'accounts',
+    'categories',
+    'tags',
+    'case_bug_status',
+    'case_bug_types',
+    'case_bug_priority',
+    'run_statuses',
+    'case_types'
+];
+
+async function wipeDatabase(orgId, pass) {
+    let client = await getClient(pass)
+    let connected = false
+    try {
+
+        await client.query('BEGIN');
+        connected = true
+
+        await client.query(`DELETE FROM run_logs WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted run_logs for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM runs WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted runs for org_id: ${orgId}`);
+
+        await client.query(`
+            DELETE FROM design_tags
+            WHERE design_id IN (SELECT id FROM design WHERE org_id = $1)
+        `, [orgId]);
+        logger.info(`Deleted design_tags for org_id: ${orgId}`);
+
+        await client.query(`
+            DELETE FROM flow_tags
+            WHERE flow_id IN (SELECT id FROM flows WHERE org_id = $1)
+        `, [orgId]);
+        logger.info(`Deleted flow_tags for org_id: ${orgId}`);
+
+        await client.query(`
+            DELETE FROM case_tags
+            WHERE case_id IN (SELECT id FROM cases WHERE org_id = $1)
+        `, [orgId]);
+        logger.info(`Deleted case_tags for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM design WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted designs for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM flow_stats WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted flow_stats for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM flows WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted flows for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM case_stats WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted case_stats for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM case_history WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted case_history for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM case_bugs WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted case_bugs for org_id: ${orgId}`);
+
+        await client.query(`DELETE FROM cases WHERE org_id = $1`, [orgId]);
+        logger.info(`Deleted cases for org_id: ${orgId}`);
+
+        //await client.query(`DELETE FROM providers WHERE org_id = $1`, [orgId]);
+        //logger.info(`Deleted providers for org_id: ${orgId}`);
+
+        //await client.query(`DELETE FROM tags WHERE org_id = $1`, [orgId]);
+        //logger.info(`Deleted tags for org_id: ${orgId}`);
+
+        //await client.query(`DELETE FROM categories WHERE org_id = $1`, [orgId]);
+        //logger.info(`Deleted categories for org_id: ${orgId}`);
+
+        // Commit the transaction
+        await client.query('COMMIT');
+
+        logger.info('Database cleaning completed successfully');
+
+    } catch (error) {
+        if (connected === true) {
+            await client.query('ROLLBACK');
+        }
+        logger.error('Error during database wipe:', error);
+        throw new Error(`Error during database wipe: ${error.message}`);
+    } finally {
+        await client.end();
+        logger.info('Database connection closed');
+    }
+}
+
 export default {
     exportDatabaseByOrgId,
     importDatabaseFromJson,
+    wipeDatabase
 };
\ No newline at end of file
diff --git a/service/server/impex_rest.js b/service/server/impex_rest.js
new file mode 100644
index 0000000000000000000000000000000000000000..63c43aa384b0ad4cedbe962b3d5f3797e4edce70
--- /dev/null
+++ b/service/server/impex_rest.js
@@ -0,0 +1,61 @@
+/*
+ * WWWHerd (c) 2025 Ginfra Project. All rights Reserved. Source licensed under Apache License Version 2.0, January 2004
+ *
+ * Import/export and database-wipe REST endpoints.
+ */ +import proxy from "./proxy.js"; +import {authenticateToken, authenticateTokenOrToken, isAdminRole} from "./common.js"; + +export async function addImpex(app) { + + app.get('/api/export', authenticateToken, async (req, res) => { + if (!isAdminRole(req.user.username)) { + return res.sendStatus(403); + } + try { + const result = await proxy.exportDatabaseByOrgId(1); + res.json(result); + } catch (error) { + res.status(500).json({error: error.message}); + } + }); + + app.post('/api/import', authenticateToken, async (req, res) => { + if (!isAdminRole(req.user.username)) { + return res.sendStatus(403); + } + try { + const result = await proxy.importDatabaseFromJson(1, req.body); + res.json(result); + } catch (error) { + if (error.message.includes('bad org')) { + return res.status(400).json({error: "Organization in data does not match your organization."}); + } + res.status(500).json({error: error.message}); + } + }); + + app.get('/api/wipe', authenticateToken, async (req, res) => { + if (!isAdminRole(req.user.username)) { + return res.sendStatus(403); + } + try { + const password = req.headers['x-wipe-password']; + + if (!password) { + return res.status(400).json({error: "Password header (x-wipe-password) is required"}); + } + + await proxy.wipeDatabase(1, password); + res.json({message: "Database wiped."}); + } catch (error) { + if (error.message.includes("password authentication failed")) { + res.status(403).json({error: "password authentication failed"}); + } else { + res.status(500).json({error: "internal server error"}); + } + } + }); + + +} \ No newline at end of file diff --git a/service/server/main.js b/service/server/main.js index e192c67b5a666d08a4a6d6d118ccc90ee6f059cf..7f466d3b511c0270450125117d923a4d080a69d5 100644 --- a/service/server/main.js +++ b/service/server/main.js @@ -136,11 +136,13 @@ import { addAdmin } from './admin_rest.js' import { addRun } from './run_rest.js' import {addFlowCase} from "./flowcase_rest.js"; import {addStats} from 
"./stats_rest.js"; +import {addImpex} from "./impex_rest.js"; await addAdmin(app); await addRun(app); await addFlowCase(app) await addStats(app) +await addImpex(app) // LoginView endpoint app.post('/api/login', async (req, res) => { @@ -242,33 +244,6 @@ app.put('/api/categories/:categoryId', authenticateToken, async (req, res) => { } }); -app.get('/api/export', authenticateToken, async (req, res) => { - if (!isAdminRole(req.user.username)) { - return res.sendStatus(403); - } - try { - const result = await proxy.exportDatabaseByOrgId(1); - res.json(result); - } catch (error) { - res.status(500).json({error: error.message}); - } -}); - -app.post('/api/import', authenticateToken, async (req, res) => { - if (!isAdminRole(req.user.username)) { - return res.sendStatus(403); - } - try { - const result = await proxy.importDatabaseFromJson(1, req.body); - res.json(result); - } catch (error) { - if (error.message.includes('bad org')) { - return res.status(400).json({error: "Organization in data does not match your organization."}); - } - res.status(500).json({error: error.message}); - } -}); - app.get('/api/flowcasenames', authenticateToken, async (req, res) => { try { const { flow_id, case_id } = req.query; @@ -384,6 +359,11 @@ app.delete('/api/providers/:providerId', authenticateToken, async (req, res) => res.json({message: 'Provider deleted successfully'}); } catch (error) { logger.error('Delete provider error:', error); + // A design is still using it. 
+ if (error.message && error.message.includes('fkdesign_prov_org_id')) { + return res.status(409).json({error: error.message}); + } + res.status(500).json({error: error.message}); } }); diff --git a/service/server/provider.js b/service/server/provider.js index 1552b37258152e9438684f0f24cd4cebf240f4b3..a65dc8be4724454528c7fff6c00b3c31f45b48f9 100644 --- a/service/server/provider.js +++ b/service/server/provider.js @@ -43,7 +43,7 @@ function decryptKey(encryptedKey) { async function createProvider(org_id, name, provider_type, url, model, auth_key) { - let client = getClient() + let client = await getClient() const encryptedKey = encryptKey(auth_key); const query = ` @@ -63,7 +63,7 @@ async function createProvider(org_id, name, provider_type, url, model, auth_key) } async function getProvider(org_id, id) { - let client = getClient() + let client = await getClient() const query = ` SELECT p.id, p.org_id, p.name, p.type, p.url, p.model @@ -80,7 +80,7 @@ async function getProvider(org_id, id) { } async function getProvidersByOrg(org_id) { - let client = getClient() + let client = await getClient() const query = ` SELECT p.id, p.org_id, p.name, p.type, p.url, p.model FROM providers p @@ -97,7 +97,7 @@ async function getProvidersByOrg(org_id) { } async function updateProvider(org_id, id, name, provider_type, url, model, key) { - let client = getClient() + let client = await getClient() const fieldsToUpdate = []; const values = []; let paramCounter = 1; @@ -123,7 +123,7 @@ async function updateProvider(org_id, id, name, provider_type, url, model, key) } // Only update key if it's provided and not empty - if (key !== undefined && key.trim() !== '') { + if (key !== undefined && key != null && key.trim() !== '') { const encryptedKey = encryptKey(key); fieldsToUpdate.push(`auth_key = $${paramCounter++}`); values.push(encryptedKey); @@ -152,7 +152,7 @@ async function updateProvider(org_id, id, name, provider_type, url, model, key) } async function deleteProvider(org_id, id) { - 
let client = getClient() + let client = await getClient() const query = 'DELETE FROM providers WHERE id = $1 AND org_id = $2'; try { diff --git a/service/server/proxy.js b/service/server/proxy.js index 4ed211e9685eeefdf511db4865d88ceda1c35bbb..d6d5727c15ec5515e7a871e4b3bb546db6e27432 100644 --- a/service/server/proxy.js +++ b/service/server/proxy.js @@ -35,6 +35,27 @@ async function changeUserPassword(orgId, username, oldPassword, newPassword) { return await admin.changeUserPassword(orgId, username, oldPassword, newPassword); } +export async function addToken(orgId, name, token, expireDate) { + return await admin.addToken(orgId, name, token, expireDate); +} + +export async function getTokensForOrg(orgId) { + return await admin.getTokensForOrg(orgId); +} + +async function getTokenInfo(orgId, token) { + return await admin.getTokenInfo(orgId, token); +} + +// Internal use. No API access. +async function getTokenInfoNoOrg(token) { + return await admin.getTokenInfoNoOrg(token); +} + +export async function deleteToken(orgId, tokenId) { + return await admin.deleteToken(orgId, tokenId); +} + // -- CatTags ---------------------------------------------------------------------------------------------------------- async function getAllTags(orgId) { @@ -141,7 +162,39 @@ async function retireCase(orgId, flowId) { return await fc.retireCase(orgId, flowId); } -// -- Flows ------------------------------------------------------------------------------------------------------------ +// -- Case Bug --------------------------------------------------------------------------------------------------------- + +async function addCaseBug(orgId, caseId, status, description, info, type, priority, reporter, link) { + return await fc.addCaseBug(orgId, caseId, status, description, info, type, priority, reporter, link); +} + +async function updateCaseBug(orgId, id, status, description, info, type, priority, link) { + return await fc.updateCaseBug(orgId, id, status, description, info, type, 
priority, link); +} + +async function queryCaseBugById(orgId, id) { + return await fc.queryCaseBugById(orgId, id); +} + +async function queryCaseBugByCaseId(orgId, caseId) { + return await fc.queryCaseBugByCaseId(orgId, caseId); +} + +async function deleteCaseBug(orgId, id) { + return await fc.deleteCaseBug(orgId, id); +} + +// -- Case History ----------------------------------------------------------------------------------------------------- + +async function addCaseHistory(orgId, caseId, name, text, og_text, decorations, og_decorations, retired) { + return await fc.addCaseHistory(orgId, caseId, name, text, og_text, decorations, og_decorations, retired); +} + +async function queryCaseHistoryByCaseId(orgId, caseId) { + return await fc.queryCaseHistoryByCaseId(orgId, caseId); +} + +// -- Flows ----------------------------------------------------------------------------------------------------------- async function getAllFlows(orgId, categoryId) { return await fc.queryAllFlowsByOrgId(orgId, categoryId); @@ -180,6 +233,10 @@ async function exportDatabaseByOrgId(orgId) { async function importDatabaseFromJson(orgId, jsonData) { return await impex.importDatabaseFromJson(orgId, jsonData); } +async function wipeDatabase(orgId, pass) { + return await impex.wipeDatabase(orgId, pass); +} + // -- Provider -------------------------------------------------------------------------------------------------------- @@ -205,14 +262,22 @@ async function deleteProvider(orgId, providerId) { // -- Stats ----------------------------------------------------------------------------------------------------------- +async function runReport(orgId, runId) { + return await stats.runReport(orgId, runId); +} + async function insertFlowStats(orgId, data) { - return await stats.insertCaseStats(orgId, data); + return await stats.insertFlowStats(orgId, data); } async function insertCaseStats(orgId, data) { return await stats.insertCaseStats(orgId, data); } +async function getCaseStats(caseId, orgId) { 
+ return await stats.getCaseStats(caseId, orgId); +} + // -- Other ----------------------------------------------------------------------------------------------------------- async function closeConnection() { @@ -228,6 +293,13 @@ export default { updateCase, deleteCase, retireCase, + addCaseBug, + updateCaseBug, + queryCaseBugById, + queryCaseBugByCaseId, + deleteCaseBug, + addCaseHistory, + queryCaseHistoryByCaseId, addTag, removeTag, updateTag, @@ -253,6 +325,7 @@ export default { deleteDesign, exportDatabaseByOrgId, importDatabaseFromJson, + wipeDatabase, closeConnection, getFlowCaseNamesForIds, createProvider, @@ -266,5 +339,12 @@ export default { queryAccountById, changeUserPassword, insertFlowStats, - insertCaseStats + insertCaseStats, + getCaseStats, + runReport, + addToken, + getTokensForOrg, + getTokenInfo, + deleteToken, + getTokenInfoNoOrg }; \ No newline at end of file diff --git a/service/server/run.js b/service/server/run.js index bde0907f89dceb3acea46f26d7ecb983dbbf8e3d..85fd7ec84516d995b41bc033d87460a694b3bc35 100644 --- a/service/server/run.js +++ b/service/server/run.js @@ -4,20 +4,20 @@ * DB helpers. */ import {logger} from './store.js'; -import { getClient } from './client.js'; +import {getClient} from './client.js'; // ------------------------------------------------------------------------------------------------------------ // - DESIGN async function addDesign(orgId, name, narrative, values, host, provider_id, flowids = [], tag_ids = []) { - let client = getClient() + let client = await getClient() try { const normalizedProviderId = (provider_id === null || Number.isNaN(provider_id)) ? 
null : provider_id; await client.query('BEGIN'); const result = await client.query( - 'INSERT INTO design (org_id, name, narrative, values, host, flowids, provider_id) VALUES ($1, $2, $3, $4, $5, $6, $7) RETURNING *', + 'INSERT INTO design (org_id, name, narrative, values, host, flowids, provider_id) VALUES ($1, $2, $3, $4, $5, $6, $7) RETURNING id, name, narrative, values, host, flowids, provider_id', [orgId, name, narrative, values, host, flowids, normalizedProviderId] ); const designId = result.rows[0].id; @@ -39,10 +39,10 @@ async function addDesign(orgId, name, narrative, values, host, provider_id, flow } async function queryAllDesignsByOrgId(orgId) { - let client = getClient() + let client = await getClient() try { let query = ` - SELECT cc.id, cc.name, cc.narrative, cc.values, cc.host, cc.org_id, cc.flowids, cc.provider_id, + SELECT cc.id, cc.name, cc.narrative, cc.values, cc.host, cc.flowids, cc.provider_id, COALESCE(array_agg(ct.tag_id) FILTER (WHERE ct.tag_id IS NOT NULL), '{}') as tag_ids FROM design cc LEFT JOIN design_tags ct ON cc.id = ct.design_id @@ -60,15 +60,15 @@ async function queryAllDesignsByOrgId(orgId) { } async function queryDesignById(designId) { - let client = getClient() + let client = await getClient() try { let query = ` - SELECT cc.id, cc.name, cc.narrative, cc.host, cc.design_status, cc.org_id, cc.flowids, cc.provider_id, + SELECT cc.id, cc.name, cc.narrative, cc.host, cc.design_status, cc.flowids, cc.provider_id, COALESCE(array_agg(ct.tag_id) FILTER (WHERE ct.tag_id IS NOT NULL), '{}') as tag_ids FROM design cc LEFT JOIN design_tags ct ON cc.id = ct.design_id WHERE cc.id = $1 - GROUP BY cc.id, cc.name, cc.narrative, cc.org_id`; + GROUP BY cc.id, cc.name, cc.narrative`; const params = [designId]; const result = await client.query(query, params); @@ -81,7 +81,7 @@ async function queryDesignById(designId) { } async function updateDesign(orgId, designId, name, narrative, values, host, provider_id, flowids = [], tag_ids = []) { - let 
client = getClient() + let client = await getClient() const normalizedProviderId = (provider_id === null || Number.isNaN(provider_id)) ? null : provider_id; @@ -89,7 +89,8 @@ async function updateDesign(orgId, designId, name, narrative, values, host, prov await client.query('BEGIN'); const result = await client.query( - 'UPDATE design SET name = $1, narrative = $2, values = $3, host = $4, provider_id = $5, flowids = $6 WHERE id = $7 AND org_id = $8 RETURNING *', + `UPDATE design SET name = $1, narrative = $2, values = $3, host = $4, provider_id = $5, flowids = $6 WHERE id = $7 AND org_id = $8 RETURNING + id, name, narrative, values, host, provider_id, flowids`, [name, narrative, values, host, normalizedProviderId, flowids, designId, orgId] ); @@ -113,7 +114,7 @@ async function updateDesign(orgId, designId, name, narrative, values, host, prov } async function deleteDesign(orgId, designId) { - let client = getClient() + let client = await getClient() try { await client.query('BEGIN'); @@ -121,7 +122,7 @@ async function deleteDesign(orgId, designId) { await client.query('DELETE FROM design_tags WHERE design_id = $1', [designId]); // Delete the design - const result = await client.query('DELETE FROM design WHERE id = $1 AND org_id = $2 RETURNING *', [designId, orgId]); + const result = await client.query('DELETE FROM design WHERE id = $1 AND org_id = $2 RETURNING name, narrative, values, host, flowids, provider_id', [designId, orgId]); await client.query('COMMIT'); return result.rows[0] || null; @@ -137,11 +138,12 @@ async function deleteDesign(orgId, designId) { // - RUNS async function addRun(orgId, name, narrative, values, host, provider_id, flowids = [], tag_ids = []) { - let client = getClient() + let client = await getClient() try { await client.query('BEGIN'); const result = await client.query( - 'INSERT INTO runs (org_id, name, narrative, values, host, run_status, flowids, provider_id) VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING *', + `INSERT INTO runs 
(org_id, name, narrative, values, host, run_status, flowids, provider_id) VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING + id, name, narrative, values, host, run_status, flowids, provider_id`, [orgId, name, narrative, values, host, 1, flowids, provider_id] ); const runId = result.rows[0].id; @@ -154,7 +156,9 @@ async function addRun(orgId, name, narrative, values, host, provider_id, flowids } await client.query('COMMIT'); - return result.rows[0]; + + return {...result.rows[0], tag_ids}; + } catch (error) { await client.query('ROLLBACK'); logger.error('Error inserting run:', error); @@ -163,16 +167,16 @@ async function addRun(orgId, name, narrative, values, host, provider_id, flowids } async function queryAllRunsByOrgId(orgId) { - let client = getClient() + let client = await getClient() try { // TODO we don't need to get everything. let query = ` - SELECT cc.id, cc.name, cc.narrative, cc.values, cc.host, cc.run_status, cc.org_id, cc.flowids, cc.provider_id, + SELECT cc.id, cc.name, cc.narrative, cc.values, cc.host, cc.run_status, cc.flowids, cc.provider_id, COALESCE(array_agg(ct.tag_id) FILTER (WHERE ct.tag_id IS NOT NULL), '{}') as tag_ids FROM runs cc LEFT JOIN run_tags ct ON cc.id = ct.run_id WHERE cc.org_id = $1 - GROUP BY cc.id, cc.name, cc.narrative, cc.org_id`; + GROUP BY cc.id, cc.name, cc.narrative`; const params = [orgId]; const result = await client.query(query, params); @@ -185,7 +189,7 @@ async function queryAllRunsByOrgId(orgId) { } async function queryRunById(orgId, runId) { - let client = getClient() + let client = await getClient() try { const result = await client.query(` SELECT @@ -213,9 +217,9 @@ async function queryRunById(orgId, runId) { async function cancelRun(orgId, runId) { - let client = getClient() + let client = await getClient() try { - const result = await client.query('UPDATE runs SET run_status = 5 AND last_update = CURRENT_TIMESTAMP WHERE org_id = $1 AND run_id = $2 AND run_status < 5', [orgId, runId]); + const result = await 
client.query('UPDATE runs SET run_status = 5, last_update = CURRENT_TIMESTAMP WHERE org_id = $1 AND id = $2 AND run_status < 5 RETURNING id, name', [orgId, runId]); return result.rows[0]; } catch (error) { logger.error('Error cancelling run:', error); @@ -224,7 +228,7 @@ async function cancelRun(orgId, runId) { } async function getRunLog(orgId, runId) { - let client = getClient() + let client = await getClient() try { const query = ` SELECT log FROM run_logs @@ -249,7 +253,7 @@ async function getRunLog(orgId, runId) { } async function deleteRun(orgId, runId) { - let client = getClient() + let client = await getClient() try { await client.query('BEGIN'); @@ -297,7 +301,7 @@ async function deleteRun(orgId, runId) { } async function deleteRunHistory(orgId, runId) { - let client = getClient() + let client = await getClient() try { await client.query('BEGIN'); @@ -310,7 +314,7 @@ async function deleteRunHistory(orgId, runId) { await client.query('DELETE FROM run_logs WHERE run_id = $1', [runId]); const result = await client.query( - 'DELETE FROM runs_history WHERE id = $1 AND org_id = $2 RETURNING *', + 'DELETE FROM runs_history WHERE id = $1 AND org_id = $2 RETURNING id', [runId, orgId] ); diff --git a/service/server/run_rest.js b/service/server/run_rest.js index 078887f67f66b09ac96e96be8734f1b96c47959e..4ca5a281517450873c6f18843e8404a839cab7f6 100644 --- a/service/server/run_rest.js +++ b/service/server/run_rest.js @@ -7,12 +7,12 @@ import {logger} from './store.js'; import { getClient } from './client.js'; import proxy from "./proxy.js"; -import { isAdminRole, authenticateToken } from "./common.js"; +import {isAdminRole, authenticateToken, authenticateTokenOrToken} from "./common.js"; export async function addRun(app) { // -- Design ------------------------------------------------------------------------------------------------------- - app.get('/api/design', authenticateToken, async (req, res) => { + app.get('/api/design', authenticateTokenOrToken, async (req, res) 
=> { try { const runs = await proxy.getAllDesign(1); res.json(runs); @@ -21,9 +21,9 @@ export async function addRun(app) { } }); - app.get('/api/design/:designId', authenticateToken, async (req, res) => { + app.get('/api/design/:designId', authenticateTokenOrToken, async (req, res) => { try { - const run = await proxy.getRun(1, parseInt(req.params.designId)); + const run = await proxy.getDesign(1, parseInt(req.params.designId)); if (!run) { - return res.status(404).json({error: 'DEsign not found'}); + return res.status(404).json({error: 'Design not found'}); } @@ -89,7 +89,7 @@ export async function addRun(app) { } }); - app.get('/api/runs/:runId', authenticateToken, async (req, res) => { + app.get('/api/runs/:runId', authenticateTokenOrToken, async (req, res) => { try { const run = await proxy.getRun(1, parseInt(req.params.runId)); if (!run) { @@ -101,7 +101,7 @@ export async function addRun(app) { } }); - app.post('/api/runs', authenticateToken, async (req, res) => { + app.post('/api/runs', authenticateTokenOrToken, async (req, res) => { try { const result = await proxy.addRun(1, req.body.name, req.body.narrative, req.body.values, req.body.host, parseInt(req.body.provider_id), req.body.flowids, req.body.tag_ids); @@ -115,7 +115,7 @@ export async function addRun(app) { } }); - app.delete('/api/runs/:runId', authenticateToken, async (req, res) => { + app.delete('/api/runs/:runId', authenticateTokenOrToken, async (req, res) => { try { const result = await proxy.deleteRun(1, parseInt(req.params.runId)); if (!result) { @@ -139,7 +139,7 @@ export async function addRun(app) { } }); - app.get('/api/cancel/:runId', authenticateToken, async (req, res) => { + app.get('/api/cancel/:runId', authenticateTokenOrToken, async (req, res) => { try { const run = await proxy.cancelRun(1, parseInt(req.params.runId)); if (!run) { @@ -151,7 +151,7 @@ export async function addRun(app) { } }); - app.get('/api/run-logs/:runId', async (req, res) => { + app.get('/api/run-logs/:runId', authenticateTokenOrToken, async (req, res) => { try { const { runId
} = req.params; diff --git a/service/server/stats.js b/service/server/stats.js index d8a6922380386312eb79c7a6a4c685a4635113e0..afbc40f871d88762e37526f0ba7e6ac90879d1db 100644 --- a/service/server/stats.js +++ b/service/server/stats.js @@ -6,12 +6,272 @@ import {logger} from './store.js'; import { getClient } from './client.js'; +// ------------------------------------------------------------------------------------------------------------ +// - REPORT + +function extractCostFields(d) { + let c_cost_ic = 0.0, c_cost_oc = 0.0; + let c_cost_it = 0, c_cost_ot = 0, c_cost_cwit = 0, c_cost_crit = 0; + + // Check if "costs" exists in the object + if (!d.hasOwnProperty("costs")) { + return [c_cost_ic, c_cost_oc, c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit]; + } + + // Extract costs as a string + const costsString = d["costs"]; + if (typeof costsString !== "string") { + return [c_cost_ic, c_cost_oc, c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit]; + } + + // Tokenize the string on whitespace + const tokens = costsString.split(/\s+/).filter(token => token.length > 0); + + // Match tokens to variables + for (const token of tokens) { + const parts = token.split(":"); + if (parts.length !== 2) { + continue; + } + + const key = parts[0]; + const value = parts[1]; + + switch (key) { + case "inputCost": + const inputCost = parseFloat(value); + if (!isNaN(inputCost)) { + c_cost_ic = inputCost; + } + break; + case "outputCost": + const outputCost = parseFloat(value); + if (!isNaN(outputCost)) { + c_cost_oc = outputCost; + } + break; + case "inputTokens": + const inputTokens = parseFloat(value); + if (!isNaN(inputTokens)) { + c_cost_it = Math.floor(inputTokens); + } + break; + case "outputTokens": + const outputTokens = parseFloat(value); + if (!isNaN(outputTokens)) { + c_cost_ot = Math.floor(outputTokens); + } + break; + case "cacheWriteInputTokens": + const cacheWriteInputTokens = parseFloat(value); + if (!isNaN(cacheWriteInputTokens)) { + c_cost_cwit = 
Math.floor(cacheWriteInputTokens); + } + break; + case "cacheReadInputTokens": + const cacheReadInputTokens = parseFloat(value); + if (!isNaN(cacheReadInputTokens)) { + c_cost_crit = Math.floor(cacheReadInputTokens); + } + break; + } + } + + return [c_cost_ic, c_cost_oc, c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit]; +} + +async function runReport(orgId, runId) { + const client = await getClient(); + + try { + logger.info(`Generating report for org ${orgId}, run ${runId}`); + + const report = { + runId: runId, + orgId: orgId, + generatedAt: new Date().toISOString(), + caseStats: { + totalCases: 0, + statusBreakdown: {}, + totalTimeMs: 0 + }, + flowStats: { + totalFlows: 0, + statusBreakdown: {}, + totalTimeMs: 0 + }, + runInfo: {} + }; + + // Get run information and orders data + const runQuery = ` + SELECT r.id, r.name, r.run_status, r.narrative, r.host, r.org_id, r.provider_id, + r.flowids, r.values, r.orders, r.start_date, r.last_update, r.exec_info, + r.exec_current_flow, r.exec_current_case, r.target_node, rs.name as status_name + FROM runs r + LEFT JOIN run_statuses rs ON r.run_status = rs.id + WHERE r.id = $1 AND r.org_id = $2 + UNION + SELECT rh.id, rh.name, rh.run_status, rh.narrative, rh.host, rh.org_id, rh.provider_id, + rh.flowids, rh.values, rh.orders, rh.start_date, rh.last_update, rh.exec_info, + rh.exec_current_flow, rh.exec_current_case, rh.target_node, rs.name as status_name + FROM runs_history rh + LEFT JOIN run_statuses rs ON rh.run_status = rs.id + WHERE rh.id = $1 AND rh.org_id = $2 + `; + + const runResult = await client.query(runQuery, [runId, orgId]); + + if (runResult.rows.length === 0) { + throw new Error(`Run ${runId} not found for organization ${orgId}`); + } + + const runData = runResult.rows[0]; + + // Set basic run info + report.runInfo = { + name: runData.name, + status: runData.status_name, + startDate: runData.start_date, + lastUpdate: runData.last_update, + host: runData.host, + narrative: runData.narrative, + totalRunTimeMs: 
runData.last_update && runData.start_date ? + new Date(runData.last_update) - new Date(runData.start_date) : 0 + }; + + // Parse orders JSON + let ordersData; + try { + ordersData = runData.orders ? JSON.parse(runData.orders) : null; + } catch (parseError) { + logger.error('Error parsing orders JSON:', parseError); + ordersData = null; + } + + let f_cost_ic = 0.0; // float cost input cost + let f_cost_oc = 0.0; // float cost output cost + let f_cost_it = 0; // float cost input tokens + let f_cost_ot = 0; // float cost output tokens + let f_cost_cwit = 0; // float cost cache write input tokens + let f_cost_crit = 0; // float cost cache read input tokens + + + if (ordersData && ordersData.flows && Array.isArray(ordersData.flows)) { + // Process flow statistics + const flowStatusCounts = {}; + const flowTimeTotals = {}; + let totalFlows = 0; + let totalFlowTime = 0; + + // Process case statistics + const caseStatusCounts = {}; + const caseTimeTotals = {}; + let totalCases = 0; + let totalCaseTime = 0; + + ordersData.flows.forEach(flow => { + totalFlows++; + + // Count flow status + const flowStatus = flow.result || 'unknown'; + flowStatusCounts[flowStatus] = (flowStatusCounts[flowStatus] || 0) + 1; + + // Sum flow time (if available) + const flowTime = flow.time || 0; + totalFlowTime += flowTime; + if (!flowTimeTotals[flowStatus]) { + flowTimeTotals[flowStatus] = { total: 0, count: 0, times: [] }; + } + flowTimeTotals[flowStatus].total += flowTime; + flowTimeTotals[flowStatus].count++; + flowTimeTotals[flowStatus].times.push(flowTime); + + // Process cases within this flow + if (flow.cases && Array.isArray(flow.cases)) { + flow.cases.forEach(testCase => { + totalCases++; + + // Count case status + const caseStatus = testCase.result || 'unknown'; + caseStatusCounts[caseStatus] = (caseStatusCounts[caseStatus] || 0) + 1; + + // Sum case time (if available) + const caseTime = testCase.time || 0; + totalCaseTime += caseTime; + if (!caseTimeTotals[caseStatus]) { + 
caseTimeTotals[caseStatus] = { total: 0, count: 0, times: [] }; + } + caseTimeTotals[caseStatus].total += caseTime; + caseTimeTotals[caseStatus].count++; + caseTimeTotals[caseStatus].times.push(caseTime); + + const [flow_ic, flow_oc, flow_it, flow_ot, flow_cwit, flow_crit] = extractCostFields(testCase); + f_cost_ic += flow_ic; + f_cost_oc += flow_oc; + f_cost_it += flow_it; + f_cost_ot += flow_ot; + f_cost_cwit += flow_cwit; + f_cost_crit += flow_crit; + }); + } + }); + + // Build flow stats breakdown + report.flowStats.totalFlows = totalFlows; + report.flowStats.totalTimeMs = totalFlowTime; + + Object.keys(flowStatusCounts).forEach(status => { + const times = flowTimeTotals[status]?.times || [0]; + report.flowStats.statusBreakdown[status] = { + count: flowStatusCounts[status], + totalTimeMs: flowTimeTotals[status]?.total || 0, + avgTimeMs: times.length > 0 ? (flowTimeTotals[status]?.total || 0) / times.length : 0, + minTimeMs: Math.min(...times), + maxTimeMs: Math.max(...times) + }; + }); + + // Build case stats breakdown + report.caseStats.totalCases = totalCases; + report.caseStats.totalTimeMs = totalCaseTime; + + Object.keys(caseStatusCounts).forEach(status => { + const times = caseTimeTotals[status]?.times || [0]; + report.caseStats.statusBreakdown[status] = { + count: caseStatusCounts[status], + totalTimeMs: caseTimeTotals[status]?.total || 0, + avgTimeMs: times.length > 0 ? 
(caseTimeTotals[status]?.total || 0) / times.length : 0, + minTimeMs: Math.min(...times), + maxTimeMs: Math.max(...times) + }; + }); + + report.costInfo = { + total_input_cost: f_cost_ic, + total_output_cost: f_cost_oc, + total_input_tokens: f_cost_it, + total_output_tokens: f_cost_ot, + total_cache_write_input_tokens: f_cost_cwit, + total_cache_read_input_tokens: f_cost_crit, + } + } + + logger.info(`Report generated successfully for run ${runId}: ${report.caseStats.totalCases} cases, ${report.flowStats.totalFlows} flows`); + return report; + + } catch (error) { + logger.error('Error generating run report:', error); + throw error; + } +} + // ------------------------------------------------------------------------------------------------------------ // - STATS async function insertCaseStats(orgId, data) { - let client = getClient() + let client = await getClient() try { // Parse JSON string if data is a string if (typeof data === 'string') { @@ -51,7 +311,7 @@ async function insertCaseStats(orgId, data) { } async function insertFlowStats(orgId, data) { - let client = getClient() + let client = await getClient() try { // Parse JSON string if data is a string if (typeof data === 'string') { @@ -77,7 +337,6 @@ async function insertFlowStats(orgId, data) { `; const result = await client.query(query, [JSON.stringify(dataWithOrgId)]); - logger.info(`Successfully inserted ${result.rowCount} flow stats records`); return { success: true, @@ -90,8 +349,31 @@ async function insertFlowStats(orgId, data) { } } +async function getCaseStats(orgId, caseId) { + const client = await getClient(); + + try { + + const query = ` + SELECT * FROM case_stats + WHERE case_id = $1 AND org_id = $2 + ORDER BY entry_timestamp ASC + `; + + const result = await client.query(query, [caseId, orgId]); + return result.rows; + + } catch (error) { + logger.error(`Error getting case stats for case ${caseId}, org ${orgId}:`, error); + throw error; + } +} + + export default { insertCaseStats, - 
insertFlowStats + insertFlowStats, + runReport, + getCaseStats }; \ No newline at end of file diff --git a/service/server/stats_rest.js b/service/server/stats_rest.js index f39e8c38c528a5a546929770202c86f0d4bfa5a2..b4d52cf3e2460c2dec9a2073597d7f9bd7abcd6b 100644 --- a/service/server/stats_rest.js +++ b/service/server/stats_rest.js @@ -3,14 +3,28 @@ * * DB helpers. */ -import {logger} from './store.js'; -import { getClient } from './client.js'; - import proxy from "./proxy.js"; -import { isAdminRole, authenticateToken } from "./common.js"; +import {authenticateToken, authenticateTokenOrToken} from "./common.js"; export async function addStats(app) { + // -- Report ------------------------------------------------------------------------------------------------------ + + app.get('/api/stats/report/:runId', authenticateTokenOrToken, async (req, res) => { + try { + const runId = parseInt(req.params.runId); + const result = await proxy.runReport(1, runId); // Using org_id = 1 for now + res.json(result); + } catch (error) { + if (error.message && error.message.toLowerCase().includes('not found')) { + res.status(404).json({error: error.message}); + } else { + res.status(500).json({error: error.message}); + } + } + }); + + // -- Stats ------------------------------------------------------------------------------------------------------- app.post('/api/stats/flow', authenticateToken, async (req, res) => { @@ -30,5 +44,18 @@ export async function addStats(app) { res.status(500).json({error: error.message}); } }); - + + app.get('/api/stats/case/:id', authenticateToken, async (req, res) => { + try { + const caseId = parseInt(req.params.id) + const result = await proxy.getCaseStats(1, caseId); + if (!result) { + return res.status(404).json({error: 'Case not found'}); + } + res.json(result); + } catch (error) { + res.status(500).json({error: error.message}); + } + }); + } \ No newline at end of file diff --git a/service/server/test/clean.js b/service/server/test/clean.js new file 
mode 100644 index 0000000000000000000000000000000000000000..60c3f76f1365e0bf1b61b13fee6a39f7e56b1346 --- /dev/null +++ b/service/server/test/clean.js @@ -0,0 +1,104 @@ +/* + * WWWHerd (c) 2025 Ginfra Project. All rights Reserved. Source licensed under Apache License Version 2.0, January 2004 + * + * Database cleaner - removes data while preserving core tables + */ + +import {getClient} from "../client.js"; + +// NOTE: call this with the password for the postgres user. +const client = await getClient(process.argv[2]); + +// Tables to preserve (not clean) +const PROTECTED_TABLES = [ + 'organizations', + 'account_roles', + 'account_state', + 'accounts', + 'categories', + 'tags', + 'case_bug_status', + 'case_bug_types', + 'case_bug_priority', + 'run_statuses', + 'case_types', + 'provider_types' +]; + +async function cleanDatabase() { + try { + + // Get all table names from the database + const tablesQuery = ` +SELECT tablename +FROM pg_tables +WHERE schemaname = 'public' +ORDER BY tablename; +`; + + const tablesResult = await client.query(tablesQuery); + const allTables = tablesResult.rows.map(row => row.tablename); + + // Filter out protected tables + const tablesToClean = allTables.filter(table => + !PROTECTED_TABLES.includes(table) + ); + + console.log('Protected tables (will be skipped):', PROTECTED_TABLES); + console.log('Tables to clean:', tablesToClean); + + if (tablesToClean.length === 0) { + console.log('No tables to clean'); + return; + } + + // Disable foreign key checks temporarily + await client.query('SET session_replication_role = replica;'); + console.log('Disabled foreign key constraints'); + + // Clean each table + for (const table of tablesToClean) { + try { + await client.query(`TRUNCATE TABLE "${table}" RESTART IDENTITY CASCADE;`); + console.log(`Cleaned table: ${table}`); + } catch (error) { + console.error(`Failed to clean table ${table}:`, error.message); + // Try alternative deletion method + try { + await client.query(`DELETE FROM 
"${table}";`); + console.log(`Alternative clean successful for table: ${table}`); + } catch (deleteError) { + console.error(`Alternative clean also failed for table ${table}:`, deleteError.message); + } + } + } + + // Re-enable foreign key checks + await client.query('SET session_replication_role = DEFAULT;'); + console.log('Re-enabled foreign key constraints'); + + console.log('Database cleaning completed successfully'); + + } catch (error) { + console.error('Error during database cleaning:', error); + throw error; + } finally { + await client.end(); + console.log('Database connection closed'); + } +} + +// Execute the cleaning if this file is run directly +if (import.meta.url === `file://${process.argv[1]}`) { + cleanDatabase() + .then(() => { + console.log('Database cleaning process finished'); + process.exit(0); + }) + .catch((error) => { + console.error('Database cleaning failed:', error); + process.exit(1); + }); +} + +export { cleanDatabase }; diff --git a/service/server/test/driver.js b/service/server/test/driver.js new file mode 100644 index 0000000000000000000000000000000000000000..671361a3e1cea978a161e0e7586d8d4a2588b8de --- /dev/null +++ b/service/server/test/driver.js @@ -0,0 +1,208 @@ +/* + * WWWHerd (c) 2025 Ginfra Project. All rights Reserved. Source licensed under Apache License Version 2.0, January 2004 + * + * DB test data loader. 
+ */ + +import {getClient} from "../client.js"; + +const client = await getClient() + +async function insertCase(orgId, catId, typeId, name, text, decorations) { + try { + await client.query( + `INSERT INTO cases (org_id, cat_id, type_id, name, text, decorations) VALUES ($1, $2, $3, $4, $5, $6) + RETURNING id, cat_id, type_id, name, text, decorations, retired`, + [orgId, catId, typeId, name, text, decorations] + ); + } catch (error) { + console.error('Error inserting case:', name); + throw error; + } +} + +async function insertFlow(orgId, catId, name, narrative, decorations, cids) { + try { + await client.query( + 'INSERT INTO flows (org_id, cat_id, name, narrative, decorations, cids) VALUES ($1, $2, $3, $4, $5, $6) RETURNING id, name, narrative, decorations, cat_id, cids, retired', + [orgId, catId, name, narrative, decorations, cids] + ); + } catch (error) { + console.error('Error inserting flow:', name); + throw error; + } +} + +async function insertCaseHistory(caseId, caseText, decorations, retired, orgId, slew) { + try { + await client.query( + `INSERT INTO case_history (case_id, case_text, decorations, retired, org_id, change_date) + VALUES ($1, $2, $3, $4, $5, CURRENT_TIMESTAMP + INTERVAL '1 second' * $6) + RETURNING id, case_id, case_text, decorations, retired, change_date`, + [caseId, caseText, decorations, retired, orgId, slew] + ); + } catch (error) { + console.error('Error inserting case history for case:', caseId); + throw error; + } +} + +async function insertCaseHistoryTime(caseId, caseText, decorations, retired, orgId, time) { + try { + await client.query( + `INSERT INTO case_history (case_id, case_text, decorations, retired, org_id, change_date) + VALUES ($1, $2, $3, $4, $5, $6) + RETURNING id, case_id, case_text, decorations, retired, change_date`, + [caseId, caseText, decorations, retired, orgId, time] + ); + } catch (error) { + console.error('Error inserting case history for case:', caseId); + throw error; + } +} + +async function insertRun(orgId, name, narrative,
values, host, flowids, provider_id) { + try { + await client.query( + `INSERT INTO runs (org_id, name, narrative, values, host, run_status, flowids, provider_id) VALUES ($1, $2, $3, $4, $5, $6, $7, $8) RETURNING + id, name, narrative, values, host, run_status, flowids, provider_id`, + [orgId, name, narrative, values, host, 1, flowids, provider_id] + ); + } catch (error) { + console.error('Error inserting run:', error.name); + throw error; + } +} + +async function insertCaseStatsBatch(dataWithOrgId) { + try { + const query = ` + INSERT INTO case_stats (org_id, case_id, flow_id, run_id, result, time_ms, provider_id, entry_timestamp, cost_ic, cost_oc, cost_it, cost_ot, cost_cwit, cost_crit) + SELECT batch.org_id, batch.case_id, batch.flow_id, batch.run_id, batch.result, batch.time_ms, batch.provider_id, runs.last_update, + batch.cost_ic, batch.cost_oc, batch.cost_it, batch.cost_ot, batch.cost_cwit, batch.cost_crit + FROM json_populate_recordset(null::case_stats, $1) AS batch + JOIN runs ON runs.id = batch.run_id + `; + + await client.query(query, [JSON.stringify(dataWithOrgId)]); + } catch (error) { + console.error('Error inserting case stats batch:', dataWithOrgId.case_id); + throw error; + } +} + +async function createSimulatedRunWithStats( + orgId, + runName, + runNarrative, + flowIds, + providerId, + caseIds, + startDateTime, + costs, + host = 'simulation-host', +) { + try { + // Calculate last_update (2-5 minutes after start_date) + const additionalMinutes = Math.floor(Math.random() * 3) + 2; // Random between 2-5 minutes + const lastUpdateDateTime = new Date(startDateTime.getTime() + additionalMinutes * 60000); + + // Generate simulated values for the run + const simulatedValues = { + totalCases: caseIds.length, + totalFlows: flowIds.length, + simulationType: 'automated', + timestamp: startDateTime.toISOString() + }; + + // Create the simulated run with specific start_date and last_update + const runResult = await client.query( + `INSERT INTO runs (org_id, name, 
narrative, values, host, run_status, flowids, provider_id, start_date, last_update) + VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) RETURNING + id, name, narrative, values, host, run_status, flowids, provider_id, start_date, last_update`, + [ + orgId, + runName, + runNarrative, + JSON.stringify(simulatedValues), + host, + 1, + `{${flowIds.join(',')}}`, + providerId, + startDateTime, + lastUpdateDateTime + ] + ); + + const runId = runResult.rows[0].id; + + // Generate simulated case stats for each combination of case and flow + const simulatedCaseStats = []; + + let index = 0 + for (const flowId of flowIds) { + for (const caseId of caseIds) { + // Generate random simulation results + const isSuccess = Math.random() > 0.2; // 80% success rate + const executionTime = Math.floor(Math.random() * 5000) + 100; // 100-5100ms + + const caseState = { + org_id: orgId, + case_id: caseId, + flow_id: flowId, + run_id: runId, + result: isSuccess ? 8 : 9, + time_ms: executionTime, + provider_id: providerId, + cost_ic: costs[index].cost_ic, + cost_oc: costs[index].cost_oc, + cost_it: costs[index].cost_it, + cost_ot: costs[index].cost_ot, + cost_cwit: costs[index].cost_cwit, + cost_crit: costs[index].cost_crit + }; + + simulatedCaseStats.push(caseState); + } + index++ + } + + // Insert all case stats in a batch + if (simulatedCaseStats.length > 0) { + const query = ` + INSERT INTO case_stats (org_id, case_id, flow_id, run_id, result, time_ms, provider_id, entry_timestamp, cost_ic, cost_oc, cost_it, cost_ot, cost_cwit, cost_crit) + SELECT batch.org_id, batch.case_id, batch.flow_id, batch.run_id, batch.result, batch.time_ms, batch.provider_id, runs.last_update, + batch.cost_ic, batch.cost_oc, batch.cost_it, batch.cost_ot, batch.cost_cwit, batch.cost_crit + FROM json_populate_recordset(null::case_stats, $1) AS batch + JOIN runs ON runs.id = batch.run_id + `; + + await client.query(query, [JSON.stringify(simulatedCaseStats)]); + } + + // Return summary of the simulated run + return { 
+ runId, + runName, + startDate: startDateTime, + lastUpdate: lastUpdateDateTime, + totalCaseStats: simulatedCaseStats.length, + successfulCases: simulatedCaseStats.filter(stat => stat.result === 8).length, // 8 = pass (matches result assignment above) + failedCases: simulatedCaseStats.filter(stat => stat.result === 9).length, // 9 = fail + averageExecutionTime: Math.round( + simulatedCaseStats.reduce((sum, stat) => sum + stat.time_ms, 0) / simulatedCaseStats.length + ), + flowIds, + caseIds + }; + + } catch (error) { + console.error('Error creating simulated run with stats:', error); + throw error; + } +} + + + +export { insertCase, insertFlow, insertCaseHistory, insertRun, insertCaseStatsBatch, client, createSimulatedRunWithStats, + insertCaseHistoryTime }; diff --git a/service/server/test/generator.js b/service/server/test/generator.js new file mode 100644 index 0000000000000000000000000000000000000000..ce0bad2ece772ac717a94ecaefefabe2e4954cd6 --- /dev/null +++ b/service/server/test/generator.js @@ -0,0 +1,309 @@ +/* + * WWWHerd (c) 2025 Ginfra Project. All rights Reserved. Source licensed under Apache License Version 2.0, January 2004 + * + * Test data generator with transaction support + */ + +import { getClient } from "../client.js"; +import { insertCase, insertFlow, insertCaseHistory, insertRun, insertCaseStatsBatch } from "./driver.js"; + +const client = await getClient(); + +async function generateTestData() { + const transaction = await client.query('BEGIN'); + + try { + console.log('Starting test data generation...'); + + // Sample organization and category IDs (assuming they exist) + const orgId = 1; + const categoryIds = [1, 2, 3]; + const typeIds = [1, 2, 3]; + const providerId = 1; + + // 1.
Insert test flows + console.log('Creating test flows...'); + const flowIds = []; + const testFlows = [ + { + name: "Authentication Flow", + narrative: "Complete user authentication and authorization flow", + decorations: JSON.stringify({ environment: "staging", browser: "chrome" }), + cids: [caseIds[0], caseIds[3]] + }, + { + name: "Performance Flow", + narrative: "End-to-end performance testing suite", + decorations: JSON.stringify({ environment: "production", load: "normal" }), + cids: [caseIds[1], caseIds[2], caseIds[5]] + }, + { + name: "Security Flow", + narrative: "Security and vulnerability testing flow", + decorations: JSON.stringify({ environment: "staging", security_level: "high" }), + cids: [caseIds[0], caseIds[3], caseIds[4]] + } + ]; + + for (let i = 0; i < testFlows.length; i++) { + const flow = testFlows[i]; + const result = await client.query( + 'INSERT INTO flows (org_id, cat_id, name, narrative, decorations, cids) VALUES ($1, $2, $3, $4, $5, $6) RETURNING id', + [orgId, categoryIds[i % categoryIds.length], flow.name, flow.narrative, + flow.decorations, `{${flow.cids.join(',')}}`] + //JSON.stringify(flow.cids)] + ); + flowIds.push(result.rows[0].id); + } + + // 3. 
Generate case history for the past week + console.log('Creating case history...'); + for (let day = 0; day < 7; day++) { + for (let caseIdx = 0; caseIdx < caseIds.length; caseIdx++) { + if (Math.random() > 0.7) { // 30% chance of case update per day + const caseId = caseIds[caseIdx]; + const updatedText = `${testCases[caseIdx].text} - Updated on day ${day + 1}`; + const decorations = JSON.stringify({ + ...JSON.parse(testCases[caseIdx].decorations), + last_modified: `day_${day + 1}`, + version: day + 1 + }); + + await client.query( + `INSERT INTO case_history (case_id, case_text, decorations, retired, org_id, change_date) + VALUES ($1, $2, $3, $4, $5, CURRENT_TIMESTAMP - INTERVAL '${6 - day} days' + INTERVAL '${Math.floor(Math.random() * 24)} hours')`, + [caseId, updatedText, decorations, false, orgId] + ); + } + } + } + + // 4. Generate test runs for the past 5 days + console.log('Creating test runs...'); + const runIds = []; + const hosts = ['test-server-01', 'test-server-02', 'test-server-03']; + + for (let day = 0; day < 5; day++) { + // 2-4 runs per day + const runsPerDay = Math.floor(Math.random() * 3) + 2; + + for (let run = 0; run < runsPerDay; run++) { + const runName = `Automated Run ${day + 1}-${run + 1}`; + const narrative = `Automated test execution on day ${day + 1}, run ${run + 1}`; + const values = JSON.stringify({ + environment: day % 2 === 0 ? 
'staging' : 'production',
+ build_number: `build_${day * 10 + run}`,
+ commit_hash: `abc123${day}${run}`,
+ triggered_by: 'scheduler'
+ });
+ const host = hosts[Math.floor(Math.random() * hosts.length)];
+ const selectedFlowIds = flowIds.slice(0, Math.floor(Math.random() * flowIds.length) + 1);
+
+ const result = await client.query(
+ `INSERT INTO runs (org_id, name, narrative, values, host, run_status, flowids, provider_id, last_update)
+ VALUES ($1, $2, $3, $4, $5, $6, $7, $8, CURRENT_TIMESTAMP - INTERVAL '${4 - day} days' + INTERVAL '${run * 6} hours')
+ RETURNING id`,
+ [orgId, runName, narrative, values, host, 1,
+ //JSON.stringify(selectedFlowIds)
+ `{${selectedFlowIds.join(',')}}`
+ , providerId]
+ );
+ runIds.push(result.rows[0].id);
+ }
+ }
+
+ // 5. Generate case statistics for all runs
+ console.log('Creating case statistics...');
+ const caseStatsData = [];
+
+ for (const runId of runIds) {
+ for (const flowId of flowIds) {
+ // Get cases for this flow
+ const flowResult = await client.query('SELECT cids FROM flows WHERE id = $1', [flowId]);
+ if (flowResult.rows.length > 0) {
+ const cidsData = flowResult.rows[0].cids;
+ let casesInFlow;
+
+ if (Array.isArray(cidsData)) {
+ casesInFlow = cidsData;
+ } else if (typeof cidsData === 'string') {
+ casesInFlow = JSON.parse(cidsData);
+ } else {
+ casesInFlow = [];
+ }
+
+ for (const caseId of casesInFlow) {
+ // Generate realistic test results
+ const success_rate = 0.85; // 85% success rate
+ const result = Math.random() < success_rate ? 8 : 9;
+ const baseTime = Math.floor(Math.random() * 5000) + 100; // 100-5100ms
+ const timeMs = result === 9 ?
baseTime * 2 : baseTime; // Failed tests take longer + + caseStatsData.push({ + org_id: orgId, + case_id: caseId, + flow_id: flowId, + run_id: runId, + result: result, + time_ms: timeMs, + provider_id: providerId + }); + } + } + } + } + + // Insert case stats in batch + if (caseStatsData.length > 0) { + await insertCaseStatsBatch(caseStatsData); + } + + await client.query('COMMIT'); + console.log('Test data generation completed successfully!'); + console.log(`Generated: + - ${caseIds.length} test cases + - ${flowIds.length} test flows + - ${runIds.length} test runs + - ${caseStatsData.length} case statistics records`); + + } catch (error) { + await client.query('ROLLBACK'); + console.error('Error generating test data:', error); + throw error; + } +} + +// Helper function to generate additional random test data +async function generateRandomTestCycle(orgId = 1, days = 3) { + const transaction = await client.query('BEGIN'); + + try { + console.log(`Generating ${days} days of random test cycle data...`); + + // Get existing cases and flows + const casesResult = await client.query('SELECT id FROM cases WHERE org_id = $1 LIMIT 10', [orgId]); + const flowsResult = await client.query('SELECT id FROM flows WHERE org_id = $1 LIMIT 5', [orgId]); + + if (casesResult.rows.length === 0 || flowsResult.rows.length === 0) { + throw new Error('No existing cases or flows found. 
Run generateTestData() first.'); + } + + const caseIds = casesResult.rows.map(row => row.id); + const flowIds = flowsResult.rows.map(row => row.id); + const hosts = ['ci-runner-01', 'ci-runner-02', 'ci-runner-03']; + + const runIds = []; + + // Generate runs for each day + for (let day = 0; day < days; day++) { + const runsToday = Math.floor(Math.random() * 4) + 1; // 1-4 runs per day + + for (let runIdx = 0; runIdx < runsToday; runIdx++) { + const runName = `CI Run ${new Date().toISOString().split('T')[0]}-${day}-${runIdx}`; + const narrative = `Continuous integration run triggered by code commit`; + const values = JSON.stringify({ + trigger: 'git_push', + branch: Math.random() > 0.8 ? 'develop' : 'main', + commit_author: `developer_${Math.floor(Math.random() * 5) + 1}`, + test_suite: 'full' + }); + + const result = await client.query( + `INSERT INTO runs (org_id, name, narrative, values, host, run_status, flowids, provider_id, last_update) + VALUES ($1, $2, $3, $4, $5, $6, $7, $8, CURRENT_TIMESTAMP - INTERVAL '${days - day - 1} days' + INTERVAL '${runIdx * 4} hours') + RETURNING id`, + [orgId, runName, narrative, values, hosts[runIdx % hosts.length], + 8, + //JSON.stringify(flowIds) + `{${flowIds.join(',')}}`, 1] + ); + runIds.push(result.rows[0].id); + } + } + + // Generate case stats for new runs + const newCaseStats = []; + for (const runId of runIds) { + for (const flowId of flowIds) { + // Random subset of cases for this flow + const casesToTest = caseIds.slice(0, Math.floor(Math.random() * caseIds.length) + 1); + + for (const caseId of casesToTest) { + const result = Math.random() < 0.9 ? 
8 : 9; // 90% success rate + const timeMs = Math.floor(Math.random() * 3000) + 50; + + newCaseStats.push({ + org_id: orgId, + case_id: caseId, + flow_id: flowId, + run_id: runId, + result: result, + time_ms: timeMs, + provider_id: 1 + }); + } + } + } + + if (newCaseStats.length > 0) { + await insertCaseStatsBatch(newCaseStats); + } + + await client.query('COMMIT'); + console.log(`Random test cycle completed! Generated ${runIds.length} runs with ${newCaseStats.length} test results.`); + + } catch (error) { + await client.query('ROLLBACK'); + console.error('Error generating random test cycle:', error); + throw error; + } +} + +export { generateTestData, generateRandomTestCycle }; + diff --git a/service/server/test/test1.json b/service/server/test/test1.json new file mode 100644 index 0000000000000000000000000000000000000000..be8a488adf236244ef2dad9fb2d18c3d986fa521 --- /dev/null +++ b/service/server/test/test1.json @@ -0,0 +1,73 @@ +{ + "days": 5, + "minInterval": 10, + "maxInterval": 180, + "flowIds": [1, 2, 3], + "caseIds": [1, 2, 3, 4, 5, 6], + "costs": [ + { + "cost_ic": 0.02280, + "cost_oc": 0.01036, + "cost_it": 100, + "cost_ot": 691, + "cost_cwit": 5927, + "cost_crit": 1811, + "case": "Case 1" + }, + { + "cost_ic": 0.013974, + "cost_oc": 0.005865, + "cost_it": 200, + "cost_ot": 391, + "cost_cwit": 3720, + "cost_crit": 0, + "case": "Case 2" + }, + { + "cost_ic": 0.01159, + "cost_oc": 0.004905, + "cost_it": 50, + "cost_ot": 327, + "cost_cwit": 2943, + "cost_crit": 1797, + "case": "Case 3" + }, + { + "cost_ic": 0.01929, + "cost_oc": 0.00897, + "cost_it": 120, + "cost_ot": 598, + "cost_cwit": 4985, + "cost_crit": 1897, + "case": "Case 4" + }, + { + "cost_ic": 0.037194, + "cost_oc": 0.017235, + "cost_it": 220, + "cost_ot": 1149, + "cost_cwit": 9614, + "cost_crit": 3585, + "case": "Case 5" + }, + { + "cost_ic": 0.04259, + "cost_oc": 0.020805, + "cost_it": 93, + "cost_ot": 1387, + "cost_cwit": 10817, + "cost_crit": 6544, + "case": "Case 6" + } + ], + "changeChance": 
0.10, + "changeInnerChance": 0.8, + "changeChanceSmall": 0.50, + "changeSizeSmall": 0.04, + "changeChanceMedium": 0.83, + "changeSizeMedium": 0.10, + "changeSizeLarge": 0.3, + "changeBounds": 0.3, + "milestoneChance": 0.15 + +} \ No newline at end of file diff --git a/service/server/test/test_1.js b/service/server/test/test_1.js new file mode 100644 index 0000000000000000000000000000000000000000..e529dcf3ec9cecaca5cbe6e9a7b4353ec5e02a07 --- /dev/null +++ b/service/server/test/test_1.js @@ -0,0 +1,176 @@ +// Add this at the end of generator.js to execute the function +import {createSimulatedRunWithStats} from "./driver.js"; + +let baselineCosts = [ + { + cost_ic: 0.02280, // Case 1 + cost_oc: 0.01036, + cost_it: 11, + cost_ot: 691, + cost_cwit: 5927, + cost_crit: 1811 + }, + { + cost_ic: 0.013974, // Case 2 + cost_oc: 0.005865, + cost_it: 8, + cost_ot: 391, + cost_cwit: 3720, + cost_crit: 0 + }, + { + cost_ic: 0.01159, // Case 3 + cost_oc: 0.004905, + cost_it: 7, + cost_ot: 327, + cost_cwit: 2943, + cost_crit: 1797 + }, + { + cost_ic: 0.01929, // Case 4 + cost_oc: 0.00897, + cost_it: 11, + cost_ot: 598, + cost_cwit: 4985, + cost_crit: 1897 + }, + { + cost_ic: 0.037194, // Case 5 + cost_oc: 0.017235, + cost_it: 22, + cost_ot: 1149, + cost_cwit: 9614, + cost_crit: 3585 + }, + { + cost_ic: 0.04259, // Case 6 + cost_oc: 0.020805, + cost_it: 24, + cost_ot: 1387, + cost_cwit: 10817, + cost_crit: 6544 + } +] + +let runningCosts = baselineCosts + +let trends = { + cost_ic: 0, + cost_oc: 0, + cost_it: 0, + cost_ot: 0, + cost_cwit: 0, + cost_crit: 0 +}; + +function updateRunningCosts() { + const costKeys = ['cost_ic', 'cost_oc', 'cost_it', 'cost_ot', 'cost_cwit', 'cost_crit']; + + // Randomly change trends + costKeys.forEach(key => { + // 30% chance to change trend direction/magnitude + if (Math.random() < 0.3) { + const rand = Math.random(); + let trendChange; + + if (rand < 0.833) { // ~83.3% chance for small changes + // Small change: -2% to +2% + trendChange = 
(Math.random() - 0.5) * 0.04; // -0.02 to +0.02 + } else if (rand < 0.958) { // ~12.5% chance for medium changes + // Medium change: -10% to +10% + trendChange = (Math.random() - 0.5) * 0.2; // -0.1 to +0.1 + } else { // ~4.2% chance for large changes + // Large change: -20% to +20% + trendChange = (Math.random() - 0.5) * 0.4; // -0.2 to +0.2 + } + + trends[key] += trendChange; + + // Keep trends within reasonable bounds (-0.3 to +0.3) + trends[key] = Math.max(-0.3, Math.min(0.3, trends[key])); + } + }); + + // Apply trends to each case + runningCosts.forEach((costCase, caseIndex) => { + costKeys.forEach(key => { + const baseValue = baselineCosts[caseIndex][key]; + const minValue = baseValue * 0.5; // 50% minimum + const maxValue = baseValue * 2.5; // 250% maximum + + // Apply trend with some random variation (-5% to +5%) + const randomVariation = (Math.random() - 0.5) * 0.1; // -0.05 to +0.05 + const totalChange = trends[key] + randomVariation; + + // Calculate new value + let newValue = baseValue * (1 + totalChange); + + // Ensure we stay within bounds + newValue = Math.max(minValue, Math.min(maxValue, newValue)); + + // Round appropriately based on value type + if (key === 'cost_it' || key === 'cost_ot' || key === 'cost_cwit' || key === 'cost_crit') { + // Integer values + costCase[key] = Math.round(newValue); + } else { + // Decimal values - round to 6 decimal places + costCase[key] = Math.round(newValue * 1000000) / 1000000; + } + }); + }); +} + +async function createTestSimulation(orgId, number, startTime) { + updateRunningCosts(); + + return await createSimulatedRunWithStats( + orgId, // orgId + 'Test #' + number, + 'Test #' + number, + [1, 2, 3], // flowIds + 1, // providerId + [1, 2, 3, 4, 5, 6], // caseIds + startTime, // startDateTime + runningCosts + ); +} + +async function test1(orgId, initialTestNumber = 1) { + + const startDate = new Date(Date.now() - 2 * 24 * 60 * 60 * 1000); + const endDate = new Date(); + + let currentTime = new Date(startDate); + 
let testNumber = initialTestNumber; + const results = []; + + while (currentTime <= endDate) { + const result = await createTestSimulation(orgId, testNumber, new Date(currentTime)); + results.push(result); + + // Add random time between 5 minutes (300000ms) and 2 hours (7200000ms) + const minInterval = 5 * 60 * 1000; // 5 minutes in milliseconds + const maxInterval = 2 * 60 * 60 * 1000; // 2 hours in milliseconds + const randomInterval = Math.floor(Math.random() * (maxInterval - minInterval + 1)) + minInterval; + + currentTime = new Date(currentTime.getTime() + randomInterval); + testNumber++; + } + + return results; +} + +async function main() { + try { + await test1(1, process.argv[2]); + console.log('Test data complete.'); + process.exit(0); + } catch (error) { + console.error('Error loading test data:', error); + process.exit(1); + } +} + +// Execute the main function +main(); + diff --git a/service/server/test/test_baseline.js b/service/server/test/test_baseline.js new file mode 100644 index 0000000000000000000000000000000000000000..4e2dea9c901b20dd8d6e41c473954ef34b799cfa --- /dev/null +++ b/service/server/test/test_baseline.js @@ -0,0 +1,17 @@ +// Add this at the end of generator.js to execute the function +import {generateTestData} from "./generator.js"; + +async function main() { + try { + await generateTestData(); + console.log('Test data generation completed successfully!'); + process.exit(0); + } catch (error) { + console.error('Error generating test data:', error); + process.exit(1); + } +} + +// Execute the main function +main(); + diff --git a/service/server/test/test_construct.js b/service/server/test/test_construct.js new file mode 100644 index 0000000000000000000000000000000000000000..4f870326b75cbc31799d4ae45819acfbbd1c7559 --- /dev/null +++ b/service/server/test/test_construct.js @@ -0,0 +1,251 @@ +// Add this at the end of generator.js to execute the function +import {createSimulatedRunWithStats, insertCaseHistoryTime} from "./driver.js"; 
+import {readFileSync} from "node:fs"; + +async function milestone(caseId, time, orgId) { + // Generate random case_text + const sampleTexts = [ + "Case updated with new findings", + "Additional documentation added", + "Status review completed", + "Investigation notes updated", + "Case analysis refined", + "New evidence incorporated", + "Progress assessment completed", + "Case details verified", + "Documentation review finished", + "Case information consolidated" + ]; + + const caseText = sampleTexts[Math.floor(Math.random() * sampleTexts.length)]; + + // Generate random decorations (assuming it's a JSON object) + const decorationOptions = [ + { priority: "high", status: "active", tags: ["urgent"] }, + { priority: "medium", status: "pending", tags: ["review"] }, + { priority: "low", status: "complete", tags: ["archived"] }, + { priority: "high", status: "investigating", tags: ["critical", "ongoing"] }, + { priority: "medium", status: "waiting", tags: ["external"] }, + { priority: "low", status: "documented", tags: ["complete", "verified"] } + ]; + + const decorations = JSON.stringify(decorationOptions[Math.floor(Math.random() * decorationOptions.length)]); + + // retired should always be false + const retired = false; + + try { + return await insertCaseHistoryTime(caseId, caseText, decorations, retired, orgId, time); + } catch (error) { + console.error('Error inserting case history with time:', error); + throw error; + } +} + +let changes +function clearChanges() { + changes = { + cost_ic: 0, + cost_oc: 0, + cost_it: 0, + cost_ot: 0, + cost_cwit: 0, + cost_crit: 0 + } +} + +async function updateRunningCosts(cases, orgId) { + const costKeys = ['cost_ic', 'cost_oc', 'cost_it', 'cost_ot', 'cost_cwit', 'cost_crit']; + + // Apply changes to each case + for (let caseIndex = 0; caseIndex < runningCosts.length; caseIndex++) { + const costCase = runningCosts[caseIndex]; + + for (const key of costKeys) { + + // Changes apply to each case + // Randomly change changes + 
clearChanges() + if (Math.random() < changeChance) { + costKeys.forEach(key => { + if (Math.random() < changeInnerChance) { + const rand = Math.random(); + let changeChange; + + if (rand < changeChanceSmall) { + changeChange = (Math.random() - 0.5) * changeSizeSmall; + } else if (rand < changeChanceMedium) { + changeChange = (Math.random() - 0.5) * changeSizeMedium; + } else { + changeChange = (Math.random() - 0.5) * changeSizeLarge; + } + + changes[key] += changeChange; + + // Keep changes within reasonable bounds (-0.3 to +0.3) + changes[key] = Math.max(-changeBounds, Math.min(changeBounds, changes[key])); + } + }); + if (Math.random() < milestoneChance) { + await milestone(cases[caseIndex], new Date(currentTime.getTime() - (Math.random() * 20 + 5) * 60 * 1000),orgId) + } + } + + const baseValue = baselineCosts[caseIndex][key]; + const minValue = baseValue * 0.5; // 50% minimum + const maxValue = baseValue * 2.5; // 250% maximum + + // Apply change with some random variation (-5% to +5%) + //const randomVariation = (Math.random() - 0.5) * 0.1; // -0.05 to +0.05 + //const totalChange = changes[key] + randomVariation; + + // Calculate new value + let newValue = baseValue * (1 + changes[key]); + + // Ensure we stay within bounds + if (newValue < minValue) { + newValue = minValue; + } else if (newValue > maxValue) { + newValue = maxValue; + } + + // Round appropriately based on value type + if (key === 'cost_it' || key === 'cost_ot' || key === 'cost_cwit' || key === 'cost_crit') { + // Integer values + costCase[key] = Math.round(newValue); + } else { + // Decimal values - round to 6 decimal places + costCase[key] = Math.round(newValue * 1000000) / 1000000; + } + } + } +} + +async function createTestSimulation(orgId, number, startTime) { + await updateRunningCosts(caseIds, orgId); + await createSimulatedRunWithStats( + orgId, // orgId + 'Test #' + number, + 'Test #' + number, + flowIds, // flowIds + 1, // providerId + caseIds, // caseIds + startTime, // startDateTime 
+ runningCosts
+ );
+}
+
+async function run(orgId) {
+ currentTime = new Date(startDate);
+ let testNumber = initialTestNumber;
+
+ while (currentTime <= endDate) {
+ await createTestSimulation(orgId, testNumber, new Date(currentTime));
+ const randomInterval = Math.floor(Math.random() * (maxInterval - minInterval + 1)) + minInterval;
+ currentTime = new Date(currentTime.getTime() + randomInterval);
+ testNumber++;
+ }
+}
+
+let initialTestNumber = 1
+let currentTime
+let startDate = new Date(Date.now() - 2 * 24 * 60 * 60 * 1000);
+const endDate = new Date();
+let runningCosts
+let baselineCosts
+let minInterval = 10 * 60 * 1000; // 10 minutes in milliseconds
+let maxInterval = 3 * 60 * 60 * 1000; // 3 hours in milliseconds
+let flowIds = [1, 2, 3];
+let caseIds = [1, 2, 3, 4, 5, 6];
+let changeInnerChance = 0.7
+let changeChance = 0.2
+let changeChanceSmall = 0.833
+let changeSizeSmall = 0.04 // -0.02 to +0.02 - the actual size is half this number.
+let changeChanceMedium = 0.958
+let changeSizeMedium = 0.2 // -0.1 to +0.1
+let changeSizeLarge = 0.4 // -0.2 to +0.2
+let changeBounds = 0.3
+let milestoneChance = 0.3
+
+async function load(orgId, filePath, inum = 1) {
+ initialTestNumber = inum
+
+ const fileContent = readFileSync(filePath, 'utf8');
+ let d = JSON.parse(fileContent);
+
+ if ('costs' in d) {
+ baselineCosts = d.costs;
+ } else {
+ throw new Error('Missing required "costs" key in the data file');
+ }
+ runningCosts = baselineCosts;
+
+ if ('days' in d) {
+ startDate = Date.now() - d.days * 24 * 60 * 60 * 1000;
+ }
+
+ if ('minInterval' in d) {
+ minInterval = d.minInterval * 60 * 1000;
+ }
+ if ('maxInterval' in d) {
+ maxInterval = d.maxInterval * 60 * 1000;
+ }
+ if (minInterval >= maxInterval) {
+ throw new Error('minInterval must be less than maxInterval');
+ }
+
+ if ('flowIds' in d) {
+ flowIds = d.flowIds;
+ }
+ if ('caseIds' in d) {
+ caseIds = d.caseIds;
+ }
+ if (caseIds.length !== baselineCosts.length) {
+ throw new
Error('caseIds and costs arrays must have the same length'); + } + + // Check for change configuration + if ('changeChance' in d) { + changeChance = d.changeChance; + } + if ('changeInnerChance' in d) { + changeInnerChance = d.changeInnerChance; + } + if ('changeChanceSmall' in d) { + changeChanceSmall = d.changeChanceSmall; + } + if ('changeSizeSmall' in d) { + changeSizeSmall = d.changeSizeSmall; + } + if ('changeChanceMedium' in d) { + changeChanceMedium = d.changeChanceMedium; + } + if ('changeSizeMedium' in d) { + changeSizeMedium = d.changeSizeMedium; + } + if ('changeSizeLarge' in d) { + changeSizeLarge = d.changeSizeLarge; + } + if ('changeBounds' in d) { + changeBounds = d.changeBounds; + } + if ('milestoneChance' in d) { + milestoneChance = d.milestoneChance; + } +} + +async function main() { + try { + await load(1, process.argv[2], process.argv[3]); + await run(1) + console.log('Test data complete.'); + process.exit(0); + } catch (error) { + console.error('Error loading test data:', error); + process.exit(1); + } +} + +// Execute the main function +main(); + diff --git a/service/server/test/test_large.js b/service/server/test/test_large.js new file mode 100644 index 0000000000000000000000000000000000000000..c5eefedc9fefaef3b79cf834864c0b259e580a6b --- /dev/null +++ b/service/server/test/test_large.js @@ -0,0 +1,17 @@ +// Add this at the end of generator.js to execute the function +import {generateRandomTestCycle, generateTestData} from "./generator.js"; + +async function main() { + try { + await generateRandomTestCycle(1, 60); + console.log('Test data generation completed successfully!'); + process.exit(0); + } catch (error) { + console.error('Error generating test data:', error); + process.exit(1); + } +} + +// Execute the main function +main(); + diff --git a/service/service/dispatcher_message.go b/service/service/dispatcher_message.go index c375c0cdb09e02c10b87006a6cd60fe9ce8c6086..773f3534549cb5b53dbcbd389a977abe83a217ac 100644 --- 
a/service/service/dispatcher_message.go +++ b/service/service/dispatcher_message.go @@ -12,6 +12,9 @@ import ( "gitlab.com/ginfra/ginfra/base" "gitlab.com/ginfra/wwwherd/common/data" "gitlab.com/ginfra/wwwherd/common/stream" + "gitlab.com/ginfra/wwwherd/service/local" + "go.uber.org/zap" + "time" ) // ##################################################################################################################### @@ -43,30 +46,138 @@ func (d *Dispatcher) receiveMessage(raw *stream.RawMessage) error { } func (d *Dispatcher) handleRunStatus(msg *stream.MsgRunStatus) error { - return d.gis.Providers.PsqlProvider.SetRunStatusAndInfo(msg.RunId, msg.Status, &msg.Info) + start := time.Now() + err := d.gis.Providers.PsqlProvider.SetRunStatusAndInfo(msg.RunId, msg.Status, &msg.Info) + executionTime := time.Since(start) + + if err != nil { + local.Logger.Error("MsgRunStatus: Failed to set run status and info", + zap.Error(err), + zap.String("msg.name", "MsgRunStatus"), + zap.Int("runId", msg.RunId), + zap.String("status", string(msg.Status)), + zap.Duration("execution_time", executionTime)) + } else { + local.Logger.Info("MsgRunStatus: Successfully set run status and info", + zap.String("msg.name", "MsgRunStatus"), + zap.Int("runId", msg.RunId), + zap.String("status", string(msg.Status)), + zap.Duration("execution_time", executionTime)) + } + return err } func (d *Dispatcher) handleRunOrders(msg *stream.MsgRunOrders) error { if msg.Final { d.handleFlowCaseStats(msg.RunId, msg.Orders) } - return d.gis.Providers.PsqlProvider.SetRunOrders(msg.RunId, msg.Orders) + start := time.Now() + err := d.gis.Providers.PsqlProvider.SetRunOrders(msg.RunId, msg.Orders) + executionTime := time.Since(start) + + if err != nil { + local.Logger.Error("Failed to set run orders", + zap.Error(err), + zap.String("msg.name", "MsgRunOrders"), + zap.Int("runId", msg.RunId), + zap.Bool("final", msg.Final), + zap.Duration("execution_time", executionTime)) + } else { + local.Logger.Info("Successfully 
set run orders", + zap.String("msg.name", "MsgRunOrders"), + zap.Int("runId", msg.RunId), + zap.Bool("final", msg.Final), + zap.Duration("execution_time", executionTime)) + } + return err } func (d *Dispatcher) handleRunCurrentFlow(msg *stream.MsgRunCurrentFlow) error { - return d.gis.Providers.PsqlProvider.SetRunCurrentFlow(msg.RunId, msg.FlowId) + start := time.Now() + err := d.gis.Providers.PsqlProvider.SetRunCurrentFlow(msg.RunId, msg.FlowId) + executionTime := time.Since(start) + + if err != nil { + local.Logger.Error("Failed to set run current flow", + zap.Error(err), + zap.String("msg.name", "MsgRunCurrentFlow"), + zap.Int("runId", msg.RunId), + zap.Int("flowId", msg.FlowId), + zap.Duration("execution_time", executionTime)) + } else { + local.Logger.Info("Successfully set run current flow", + zap.String("msg.name", "MsgRunCurrentFlow"), + zap.Int("runId", msg.RunId), + zap.Int("flowId", msg.FlowId), + zap.Duration("execution_time", executionTime)) + } + return err } func (d *Dispatcher) handleRunCurrentCase(msg *stream.MsgRunCurrentCase) error { - return d.gis.Providers.PsqlProvider.SetRunCurrentCase(msg.RunId, msg.CaseId) + start := time.Now() + err := d.gis.Providers.PsqlProvider.SetRunCurrentCase(msg.RunId, msg.CaseId) + executionTime := time.Since(start) + + if err != nil { + local.Logger.Error("Failed to set run current case", + zap.Error(err), + zap.String("msg.name", "MsgRunCurrentCase"), + zap.Int("runId", msg.RunId), + zap.Int("caseId", msg.CaseId), + zap.Duration("execution_time", executionTime)) + } else { + local.Logger.Info("Successfully set run current case", + zap.String("msg.name", "MsgRunCurrentCase"), + zap.Int("runId", msg.RunId), + zap.Int("caseId", msg.CaseId), + zap.Duration("execution_time", executionTime)) + } + return err } func (d *Dispatcher) handleCancelComplete(msg *stream.MsgCancelComplete) error { name := getCancelName(msg.OrgId, msg.RunId) delete(d.pendingCancel, name) - return 
d.gis.Providers.PsqlProvider.SetRunStatusAndInfo(msg.RunId, data.RunStatusCancelled, base.StringPointer("Cancelled.")) + start := time.Now() + err := d.gis.Providers.PsqlProvider.SetRunStatusAndInfo(msg.RunId, data.RunStatusCancelled, base.StringPointer("Cancelled.")) + executionTime := time.Since(start) + + if err != nil { + local.Logger.Error("Failed to set run status to cancelled", + zap.Error(err), + zap.String("msg.name", "MsgCancelComplete"), + zap.Int("runId", msg.RunId), + zap.Int("orgId", msg.OrgId), + zap.Duration("execution_time", executionTime)) + } else { + local.Logger.Info("Successfully set run status to cancelled", + zap.String("msg.name", "MsgCancelComplete"), + zap.Int("runId", msg.RunId), + zap.Int("orgId", msg.OrgId), + zap.Duration("execution_time", executionTime)) + } + return err } func (d *Dispatcher) handleRunLog(msg *stream.MsgRunLog) error { - return d.gis.Providers.PsqlProvider.SetRunLog(msg.OrgId, msg.RunId, msg.Text) + start := time.Now() + err := d.gis.Providers.PsqlProvider.SetRunLog(msg.OrgId, msg.RunId, msg.Text) + executionTime := time.Since(start) + + if err != nil { + local.Logger.Error("Failed to set run log", + zap.Error(err), + zap.String("msg.name", "MsgRunLog"), + zap.Int("runId", msg.RunId), + zap.Int("orgId", msg.OrgId), + zap.Duration("execution_time", executionTime)) + } else { + local.Logger.Info("Successfully set run log", + zap.String("msg.name", "MsgRunLog"), + zap.Int("runId", msg.RunId), + zap.Int("orgId", msg.OrgId), + zap.Duration("execution_time", executionTime)) + } + return err } diff --git a/service/service/dispatcher_runs.go b/service/service/dispatcher_runs.go index 259d793bb35f7bfc8e15a925d5cc05fb796fcbf4..b886fac32f98ea5e03342effdf56ec86d87414a0 100644 --- a/service/service/dispatcher_runs.go +++ b/service/service/dispatcher_runs.go @@ -23,24 +23,54 @@ import ( // ##################################################################################################################### // # WORK +func 
fixProvider(p *local.DbProvider) (string, string) { + var ( + u = p.URL + m = p.Model + ) + switch p.Type { + case data.ProviderAnthropic: + if m == "" { + m = data.DefaultLLMModelAnthropic + } + + case data.ProviderOpenRouter: + if m == "" { + m = data.DefaultLLMModelOpenApi + } + if u == "" { + u = "https://openrouter.ai/api/v1/" + } + + case data.ProviderSelfHost: + //NOP + break + + default: + panic(base.NewGinfraError("Unknown provider type.")) + } + + return u, m +} + func (d *Dispatcher) discoverAndGetAuth() (string, string, error) { // TODO Do we want to cache these? // We need the service surl, err := d.gis.ServiceContext.ConfigProvider.DiscoverService(d.gis.ServiceContext.Scfg.DeploymentName, - "magnitude/test") + "runner/test") if err != nil || surl == "" { - local.Logger.Warn("Could not discover magnitude service. Stalling.", zap.Error(err)) + local.Logger.Warn("Could not discover runner service. Stalling.", zap.Error(err)) time.Sleep(local.DispatchStallTime * time.Millisecond) - return "", "", base.NewGinfraError("Could not discover magnitude service.") + return "", "", base.NewGinfraError("Could not discover runner service.") } // And the service auth var mauth string scfg := local.GetConfiguredServiceSpecifics(d.gis.ServiceContext) if scfg.Err == nil { - if mauth, err = scfg.GetValue(local.ConfigManitudeAuth).GetValueString(); err != nil { - local.Logger.Warn("Could not get magnitude service authorization token. Stalling.", zap.Error(err)) + if mauth, err = scfg.GetValue(local.ConfigRunnerAuth).GetValueString(); err != nil { + local.Logger.Warn("Could not get runner service authorization token. 
Stalling.", zap.Error(err)) time.Sleep(local.DispatchStallTime * time.Millisecond) - return "", "", base.NewGinfraError("Could not get magnitude service authorization token") + return "", "", base.NewGinfraError("Could not get runner service authorization token") } } return surl, mauth, nil @@ -102,6 +132,7 @@ func (d *Dispatcher) dispatchNew(runs []local.DbRun) { if provider, err = d.gis.Providers.PsqlProvider.GetProviderInfo(run.ProvId); err != nil { panic(base.NewGinfraErrorChild("Could not get provider from database.", err)) } + providerUrl, providerModel := fixProvider(provider) // Create a client var rc *client.ClientWithResponses @@ -119,9 +150,9 @@ func (d *Dispatcher) dispatchNew(runs []local.DbRun) { Uid: strconv.Itoa(run.OrgID), Provider: strconv.Itoa(provider.Type), Values: run.Values, - ProviderUrl: base.StringPtrOrNil(provider.URL), + ProviderUrl: base.StringPtrOrNil(providerUrl), ProviderToken: base.StringPtrOrNil(provider.AuthKey), - ProviderModel: base.StringPtrOrNil(provider.Model), + ProviderModel: base.StringPtrOrNil(providerModel), } if params.Values != nil && len(*params.Values) == 0 { // TODO workaround because the stub is marking his as required for some reason when clearly the spec says otherwise. @@ -156,7 +187,7 @@ func (d *Dispatcher) dispatchNew(runs []local.DbRun) { func (d *Dispatcher) dispatchCancel(runs []local.DbRun) { surl, mauth, err := d.discoverAndGetAuth() if err != nil { - local.Logger.Warn("Could not discover magnitude service. Ignoring cancellations.", zap.Error(err)) + local.Logger.Warn("Could not discover runner service. 
Ignoring cancellations.", zap.Error(err)) return } diff --git a/service/service/stats.go b/service/service/stats.go index f6303262031adade5c5c9cb13a0de05f3bf11140..774f25c388b75d52cd313cdca3b9cb9f8d6d9585 100644 --- a/service/service/stats.go +++ b/service/service/stats.go @@ -11,9 +11,12 @@ import ( "encoding/json" "fmt" "gitlab.com/ginfra/ginfra/base" + "gitlab.com/ginfra/ginfra/common/service/gotel" "gitlab.com/ginfra/wwwherd/service/local" "go.uber.org/zap" "strconv" + "strings" + "time" ) // ##################################################################################################################### @@ -32,6 +35,66 @@ func convertToInt(i interface{}) (int, error) { } } +func extractCostFields(d map[string]interface{}) (float64, float64, int, int, int, int) { + var c_cost_ic, c_cost_oc float64 + var c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit int + + // Check if "costs" exists in the map + costsInterface, ok := d["costs"] + if !ok { + return c_cost_ic, c_cost_oc, c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit + } + + // Extract costs as a string + costsString, ok := costsInterface.(string) + if !ok { + return c_cost_ic, c_cost_oc, c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit + } + + // Tokenize the string on whitespace + tokens := strings.Fields(costsString) + + // Match tokens to variables + for _, token := range tokens { + parts := strings.Split(token, ":") + if len(parts) != 2 { + continue + } + + key := parts[0] + value := parts[1] + + switch key { + case "inputCost": + if val, err := strconv.ParseFloat(value, 64); err == nil { + c_cost_ic = val + } + case "outputCost": + if val, err := strconv.ParseFloat(value, 64); err == nil { + c_cost_oc = val + } + case "inputTokens": + if val, err := strconv.ParseFloat(value, 64); err == nil { + c_cost_it = int(val) + } + case "outputTokens": + if val, err := strconv.ParseFloat(value, 64); err == nil { + c_cost_ot = int(val) + } + case "cacheWriteInputTokens": + if val, err := strconv.ParseFloat(value, 
64); err == nil { + c_cost_cwit = int(val) + } + case "cacheReadInputTokens": + if val, err := strconv.ParseFloat(value, 64); err == nil { + c_cost_crit = int(val) + } + } + } + + return c_cost_ic, c_cost_oc, c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit +} + func extractFlowFields(d map[string]interface{}) (int, int, int, error) { // Extract idInterface, ok := d["id"] @@ -44,7 +107,7 @@ func extractFlowFields(d map[string]interface{}) (int, int, int, error) { } timeInterface, ok := d["time"] if !ok { - return 0, 0, 0, fmt.Errorf("missing 'time' field") + timeInterface = "0" } // Convert to int @@ -56,11 +119,11 @@ func extractFlowFields(d map[string]interface{}) (int, int, int, error) { if err != nil { return 0, 0, 0, base.NewGinfraErrorChild("failed to convert 'result' to integer.", err) } - time, err := convertToInt(timeInterface) + ftime, err := convertToInt(timeInterface) if err != nil { return 0, 0, 0, base.NewGinfraErrorChild("failed to convert 'result' to integer.", err) } - return id, result, time, nil + return id, result, ftime, nil } func (d *Dispatcher) handleFlowCaseStats(RunId int, orders string) { @@ -98,6 +161,10 @@ func (d *Dispatcher) handleFlowCaseStats(RunId int, orders string) { // Loop through all flows in the data for _, flowInterface := range flows { + //-- Flow data ------------------------------------------------------------------------------------------------- + var f_cost_ic, f_cost_oc float64 + var f_cost_it, f_cost_ot, f_cost_cwit, f_cost_crit int + flow, ok := flowInterface.(map[string]interface{}) if !ok { local.Logger.Error("FlowCaseStats: Flow is not a valid object") @@ -113,16 +180,7 @@ func (d *Dispatcher) handleFlowCaseStats(RunId int, orders string) { local.Logger.Info("FlowCaseStats: Flow stats", zap.Int("flow.id", flowId), zap.Int("flow.result", flowResult), zap.Int("flow.time", flowTime), zap.Int("run.id", RunId), zap.Int("provider.id", run.ProvId)) - // Add to flowdata. 
The json must match the db schema since it will be streamed directly into it. - fd := map[string]interface{}{ - "flow_id": flowId, - "run_id": RunId, - "result": flowResult, - "time_ms": flowTime, - "provider_id": run.ProvId, - } - flowD = append(flowD, fd) - + //-- Case data and report -------------------------------------------------------------------------------------- // Get the cases array from the flow casesInterface, ok := flow["cases"] if !ok { @@ -153,21 +211,58 @@ func (d *Dispatcher) handleFlowCaseStats(RunId int, orders string) { local.Logger.Info("FlowCaseStats: Flow stats", zap.Int("case.id", caseId), zap.Int("case.result", caseResult), zap.Int("case.time", caseTime), zap.Int("run.id", RunId), zap.Int("provider.id", run.ProvId)) - // Add to casedata. The json must match the db schema since it will be streamed directly into it. - cd := map[string]interface{}{ - "case_id": caseId, + // Extract costs + c_cost_ic, c_cost_oc, c_cost_it, c_cost_ot, c_cost_cwit, c_cost_crit := extractCostFields(caseObj) + f_cost_ic += c_cost_ic + f_cost_oc += c_cost_oc + f_cost_it += c_cost_it + f_cost_ot += c_cost_ot + f_cost_cwit += c_cost_cwit + f_cost_crit += c_cost_crit + + if c_cost_it > 0 && c_cost_ot > 0 { + // Add to casedata if at least some tokens were passed. The json must match the db schema since it will be streamed directly into it. + cd := map[string]interface{}{ + "case_id": caseId, + "flow_id": flowId, + "run_id": RunId, + "result": caseResult, + "time_ms": caseTime, + "provider_id": run.ProvId, + "cost_ic": c_cost_ic, + "cost_oc": c_cost_oc, + "cost_it": c_cost_it, + "cost_ot": c_cost_ot, + "cost_cwit": c_cost_cwit, + "cost_crit": c_cost_crit, + } + caseD = append(caseD, cd) + } + } + + //-- Flow report ----------------------------------------------------------------------------------------------- + + // Add to flowdata if at least some tokens are passed. The json must match the db schema since it will be streamed directly into it. 
+ if f_cost_it > 0 && f_cost_ot > 0 { + fd := map[string]interface{}{ "flow_id": flowId, "run_id": RunId, - "result": caseResult, - "time_ms": caseTime, + "result": flowResult, + "time_ms": flowTime, "provider_id": run.ProvId, + "cost_ic": f_cost_ic, + "cost_oc": f_cost_oc, + "cost_it": f_cost_it, + "cost_ot": f_cost_ot, + "cost_cwit": f_cost_cwit, + "cost_crit": f_cost_crit, } - caseD = append(caseD, cd) - + flowD = append(flowD, fd) } } // Stuff them into the database. + // TODO make this configurable to turn off. err = d.gis.Providers.PsqlProvider.PushStatsCase(run.OrgID, caseD) if err != nil { local.Logger.Error("FlowCaseStats: Error pushing case stats to database.", zap.Error(err)) @@ -177,4 +272,35 @@ func (d *Dispatcher) handleFlowCaseStats(RunId int, orders string) { local.Logger.Error("FlowCaseStats: Error pushing flow stats to database.", zap.Error(err)) } + if d.gis.ServiceContext.Telemetry { + o := gotel.GetOtelConfig() + if o != nil { + // Loop through flowD and add type and timestamp + for _, flowItem := range flowD { + if flowMap, ok := flowItem.(map[string]interface{}); ok { + flowMap["type"] = "flow" + flowMap["timestamp"] = time.Now().Unix() + // TODO not sure this is how I want the data to be sent. Same with case. 
+ if jsonBytes, err := json.Marshal(flowMap); err == nil { + o.Logger.Info("flow_stats", "data", string(jsonBytes)) + } + } + } + + // Loop through caseD and add type and timestamp + for _, caseItem := range caseD { + if caseMap, ok := caseItem.(map[string]interface{}); ok { + caseMap["type"] = "case" + caseMap["timestamp"] = time.Now().Unix() + // Convert map to JSON for logging + if jsonBytes, err := json.Marshal(caseMap); err == nil { + o.Logger.Info("case_stats", "data", string(jsonBytes)) + } + + } + } + + } + } + } diff --git a/service/src/components/BugEditDialog.vue b/service/src/components/BugEditDialog.vue new file mode 100644 index 0000000000000000000000000000000000000000..31aa7bfc6ab858c7419fffc2b1eb76b41f92b796 --- /dev/null +++ b/service/src/components/BugEditDialog.vue @@ -0,0 +1,219 @@ + + + + diff --git a/service/src/components/CaseBugDialog.vue b/service/src/components/CaseBugDialog.vue new file mode 100644 index 0000000000000000000000000000000000000000..4107917b1b842f823825ddd65290e07c0e847920 --- /dev/null +++ b/service/src/components/CaseBugDialog.vue @@ -0,0 +1,339 @@ + + + + + + + diff --git a/service/src/components/CaseDialog.vue b/service/src/components/CaseDialog.vue new file mode 100644 index 0000000000000000000000000000000000000000..0c664a6694af4e69cc6c50112eb34417000362d0 --- /dev/null +++ b/service/src/components/CaseDialog.vue @@ -0,0 +1,423 @@ + + + + + + \ No newline at end of file diff --git a/service/src/components/CaseMilestoneDialog.vue b/service/src/components/CaseMilestoneDialog.vue new file mode 100644 index 0000000000000000000000000000000000000000..bc0fb4c15891d28168f1aad501878adbd31eca5f --- /dev/null +++ b/service/src/components/CaseMilestoneDialog.vue @@ -0,0 +1,87 @@ + + + diff --git a/service/src/components/ChangeContent.vue b/service/src/components/ChangeContent.vue new file mode 100644 index 0000000000000000000000000000000000000000..fbf84bc98ef3977729ff394f14aac3b55a096445 --- /dev/null +++ 
b/service/src/components/ChangeContent.vue @@ -0,0 +1,103 @@ + + + + + + \ No newline at end of file diff --git a/service/src/components/ChartCosts.vue b/service/src/components/ChartCosts.vue new file mode 100644 index 0000000000000000000000000000000000000000..16c525f466680df4591e7972ab5ba1d765a27604 --- /dev/null +++ b/service/src/components/ChartCosts.vue @@ -0,0 +1,361 @@ + + + + + diff --git a/service/src/components/ChartPerformance.vue b/service/src/components/ChartPerformance.vue new file mode 100644 index 0000000000000000000000000000000000000000..2550af8ea849af969f0c55b177680219c9ab1168 --- /dev/null +++ b/service/src/components/ChartPerformance.vue @@ -0,0 +1,322 @@ + + + + + diff --git a/service/src/components/DecorationChips.vue b/service/src/components/DecorationChips.vue new file mode 100644 index 0000000000000000000000000000000000000000..baa7444af90b10313aa9244076b51adda4922e13 --- /dev/null +++ b/service/src/components/DecorationChips.vue @@ -0,0 +1,102 @@ + + + + + \ No newline at end of file diff --git a/service/src/components/FilterTagsDialog.vue b/service/src/components/FilterTagsDialog.vue new file mode 100644 index 0000000000000000000000000000000000000000..de28fa351a84de23d7b08eca1217a8a8b6d705df --- /dev/null +++ b/service/src/components/FilterTagsDialog.vue @@ -0,0 +1,116 @@ + + + + + \ No newline at end of file diff --git a/service/src/components/HistoryDialog.vue b/service/src/components/HistoryDialog.vue new file mode 100644 index 0000000000000000000000000000000000000000..8b6d492e5214c027edbb4c9e475d483d996d2326 --- /dev/null +++ b/service/src/components/HistoryDialog.vue @@ -0,0 +1,108 @@ + + + + + + \ No newline at end of file diff --git a/service/src/components/LogsDialog.vue b/service/src/components/LogsDialog.vue index c18946f18fee299bafb09ff43470020aefad2435..72fa915519135dda21e64d7f19ba8f21b1dd170c 100644 --- a/service/src/components/LogsDialog.vue +++ b/service/src/components/LogsDialog.vue @@ -46,11 +46,9 @@ }" :style="{ 
paddingLeft: `${16 + item.depth * 20}px` }" > -
{{ item.displayText }}
-
@@ -65,9 +63,8 @@ :key="index" class="log-line py-1" > -
{{ line }}
-
+ @@ -140,117 +137,54 @@ const flattenedLogs = computed((): FlattenedLogItem[] => { const result: FlattenedLogItem[] = [] - const processValue = (value: any, depth: number = 0) => { - if (Array.isArray(value)) { - // Each array item gets "directive" label at depth 0 - value.forEach((item, index) => { - - // Process the directive's content - if (typeof item === 'object' && item !== null) { - processObject(item, depth + 1) - } else { - result.push({ - displayText: formatPrimitiveValue(item), - depth: depth + 1, - isDirective: false, - isStartFlow: false, - isStartCase: false, - isLogEntry: true - }) - } - }) - } else if (typeof value === 'object' && value !== null) { - processObject(value, depth) - } else { - result.push({ - displayText: formatPrimitiveValue(value), - depth, - isDirective: false, - isStartFlow: false, - isStartCase: false, - isLogEntry: true - }) - } - } - - const processObject = (obj: any, depth: number) => { - Object.entries(obj).forEach(([key, value]) => { - if (Array.isArray(value)) { - // Show the key name first, then process the array - const isStartFlow = key === 'start.flow' - const isStartCase = key === 'start.case' - - result.push({ - displayText: `${key}:`, - depth, - isDirective: false, - isStartFlow, - isStartCase, - isLogEntry: false - }) - processValue(value, depth) - } else if (typeof value === 'object' && value !== null) { - processObject(value, depth + 1) + try { + parsedLogs.value.forEach((item: any) => { + let depth = 0 + let id = false + let isf = false + let isc = false + let log = false + + // Safely check if message exists and is a string + const message = typeof item.message === 'string' ? 
item.message : String(item.message || '') + + if (message.startsWith("##")) { + depth = 2 + id = true + } else if (message.startsWith("start flow:")) { + depth = 0 + isf = true + } else if (message.startsWith("start case:")) { + depth = 1 + isc = true } else { - // For primitive values, show key: value - const isStartFlow = key === 'start.flow' - const isStartCase = key === 'start.case' - - if (key == 'directive') { - result.push({ - displayText: `${key}: ${formatPrimitiveValue(value)}`, - depth, - isDirective: true, - isStartFlow: false, - isStartCase: false, - isLogEntry: false - }) - - } else { - result.push({ - displayText: `${key}: ${formatPrimitiveValue(value)}`, - depth, - isDirective: false, - isStartFlow, - isStartCase, - isLogEntry: false - }) - } - + log = true + depth = 3 + item.message = `${item.timestamp || ''} ${item.type || ''} ${message}` } - }) - } - // Start processing from the root - if (Array.isArray(parsedLogs.value)) { - // Process each top-level array item directly - parsedLogs.value.forEach((item) => { - if (typeof item === 'object' && item !== null) { - processObject(item, 0) - } else { - result.push({ - displayText: formatPrimitiveValue(item), - depth: 0, - isDirective: false, - isStartFlow: false, - isStartCase: false, - isLogEntry: true - }) - } + result.push({ + displayText: message, + depth: depth, + isDirective: id, + isStartFlow: isf, + isStartCase: isc, + isLogEntry: log + }) }) - } else if (typeof parsedLogs.value === 'object' && parsedLogs.value !== null) { - processObject(parsedLogs.value, 0) - } else { + } catch (error) { + console.error('Error processing logs:', error) result.push({ - displayText: formatPrimitiveValue(parsedLogs.value), + displayText: "Error during log processing", depth: 0, isDirective: false, isStartFlow: false, isStartCase: false, - isLogEntry: true + isLogEntry: false }) } + return result }) @@ -260,18 +194,6 @@ const logLines = computed(() => { return props.logs.split('\n').filter(line => 
line.trim().length > 0) }) -const formatPrimitiveValue = (value: any): string => { - if (value === null || value === undefined) { - return 'null' - } - - if (typeof value === 'string') { - return value - } - - return String(value) -} - const closeDialog = () => { emit('update:modelValue', false) } diff --git a/service/src/components/ReportDialog.vue b/service/src/components/ReportDialog.vue new file mode 100644 index 0000000000000000000000000000000000000000..38ab44df82fa0b51d2dc6e3025a898474d0b2549 --- /dev/null +++ b/service/src/components/ReportDialog.vue @@ -0,0 +1,323 @@ + + + + + \ No newline at end of file diff --git a/service/src/components/SnackbarNotification.vue b/service/src/components/SnackbarNotification.vue new file mode 100644 index 0000000000000000000000000000000000000000..d337a9676b3ad9e952cdab822a91bdfa91150c72 --- /dev/null +++ b/service/src/components/SnackbarNotification.vue @@ -0,0 +1,58 @@ + + + diff --git a/service/src/components/nav.ts b/service/src/components/nav.ts index d03a9035684b425220d1d63cac20b699fdf6615d..646f197ccbf00cb2d3433707f2ed049d2fb045d5 100644 --- a/service/src/components/nav.ts +++ b/service/src/components/nav.ts @@ -38,3 +38,7 @@ export const navigateToRun = () => { export const navigateToSettings = () => { router.push('/settings') } + +export const navigateToCaseData = () => { + router.push('/casedata') +} diff --git a/service/src/components/status.ts b/service/src/components/status.ts index 01756ef5fb96455c924753a74898f854a7aef9a4..c1eba789eb91f820b1a63bff8dabadbef60c1089 100644 --- a/service/src/components/status.ts +++ b/service/src/components/status.ts @@ -1,6 +1,11 @@ // WWWHerd (c) 2025 Ginfra Project. All rights Reserved. 
Source licensed under Apache License Version 2.0, January 2004 // Get run status text +// ------------------------------------------------------------------------------------------------------- +// - RUN + +import {ref} from "vue"; + export const RUN_STATUS = { NEW: 1, WAITING: 2, @@ -81,4 +86,137 @@ export const getRunStatusColor = (status: number) => { case RUN_STATUS.SKIPPED: return 'grey' default: return 'grey' } -} \ No newline at end of file +} + +// ------------------------------------------------------------------------------------------------------- +// - BUG + +export const BUG_STATUS = { + NEW: 1, + OPEN: 2, + FIXED: 3, + DUPLICATE: 4, + WONT_FIX: 5 +} as const; + + +export const getBugStatusText = (status: number) => { + switch (status) { + case BUG_STATUS.NEW: return 'New' + case BUG_STATUS.OPEN: return 'Open' + case BUG_STATUS.FIXED: return 'Fixed' + case BUG_STATUS.DUPLICATE: return 'Duplicate' + case BUG_STATUS.WONT_FIX: return 'Wont Fix' + default: return 'Unknown' + } +} +export const statusOptions = [ + '', + 'New', + 'Open', + 'Fixed', + 'Duplicate', + 'Wont Fix' +] + +export const BUG_TYPE = { + BUG: 1, + FEATURE: 2, + PERFORMANCE: 3, + COST: 4, + OBSOLETE: 5 +} as const; + +export const getBugTypeText = (status: number) => { + switch (status) { + case BUG_TYPE.BUG: return 'Bug' + case BUG_TYPE.FEATURE: return 'Feature' + case BUG_TYPE.PERFORMANCE: return 'Performance' + case BUG_TYPE.COST: return 'Cost' + case BUG_TYPE.OBSOLETE: return 'Obsolete' + default: return 'Unknown' + } +} + +export const typeOptions = [ + '', + 'Bug', + 'Feature', + 'Performance', + 'Cost', + "Obsolete" +] + +export const BUG_PRIORITY = { + CRITICAL: 1, + HIGH: 2, + MEDIUM: 3, + LOW: 4, + INFORMATIONAL: 5 +} as const; + +export const getBugPriorityText = (priority: number) => { + switch (priority) { + case BUG_PRIORITY.CRITICAL: return 'Critical' + case BUG_PRIORITY.HIGH: return 'High' + case BUG_PRIORITY.MEDIUM: return 'Medium' + case BUG_PRIORITY.LOW: return 'Low' + 
case BUG_PRIORITY.INFORMATIONAL: return 'Informational' + default: return 'Unknown' + } +} + +export const priorityOptions = [ + '', + 'Critical', + 'High', + 'Medium', + 'Low', + "Informational" +] + +export const getPriorityColor = (priority: number) => { + switch (priority) { + case BUG_PRIORITY.CRITICAL: return 'red' + case BUG_PRIORITY.HIGH: return 'orange' + case BUG_PRIORITY.MEDIUM: return 'yellow' + case BUG_PRIORITY.LOW: return 'brown' + case BUG_PRIORITY.INFORMATIONAL: return 'blue' + default: return 'grey' + } +} + +export const bugFilters = ref({ + type: [] as number[], + status: [1, 2] as number[], + priority: [] as number[] +}) + +// TODO these should be merged with the options above. + +// Define filter options - you may need to adjust these based on your actual enum values +export const typeFilterOptions = [ + { title: 'Bug', value: 1 }, + { title: 'Feature', value: 2 }, + { title: 'Performance', value: 3 }, + { title: 'Cost', value: 4 }, + { title: 'Obsolete', value: 5 } +] + +export const statusFilterOptions = [ + { title: 'New', value: 1 }, + { title: 'Open', value: 2 }, + { title: 'Fixed', value: 3 }, + { title: 'Duplicate', value: 4 }, + { title: 'Wont Fix', value: 5 } +] + +export const priorityFilterOptions = [ + { title: 'Critical', value: 1 }, + { title: 'High', value: 2 }, + { title: 'Medium', value: 3 }, + { title: 'Low', value: 4 }, + { title: 'Informational', value: 5 } +] + + diff --git a/service/src/plugins/vuetify.js b/service/src/plugins/vuetify.js index 60688ae3202a7770d226a5f5f05098e5d61c35df..ecd4acfcd6e96bd7dff2752d756bf58fd48f599e 100644 --- a/service/src/plugins/vuetify.js +++ b/service/src/plugins/vuetify.js @@ -20,10 +20,13 @@ const topTheme = { dialogBackground: '#a4f1e5', buttonLive: '#0259fb', buttonDisabled: '#6e6d6d', - appbar: '#abc7e6', + bug: '#f6917e', + history: '#9a6d29', + appbar: '#bed7f3', appbutton: '#e0e4fd', softbutton: '#5061ca', hibutton: '#1f9f5b', + clickable: '#111112', info: '#0a0a0a', error: 
'#ff0000', ok: '#00aa00', diff --git a/service/src/router.ts b/service/src/router.ts index d95e245c4821954a0621f1c0e140bb3b6ecbfc07..821d5a1a31e7be0f763e8ad8492041f18cc90e7b 100644 --- a/service/src/router.ts +++ b/service/src/router.ts @@ -37,6 +37,12 @@ const router = createRouter({ name: 'admin', component: () => import('./routes/admin/AdminView.vue'), meta: { requiresAuth: true } + }, + { + path: '/casedata', + name: 'casedata', + component: () => import('./routes/casedata/CaseData.vue'), + meta: { requiresAuth: true } } ] diff --git a/service/src/routes/admin/AdminView.vue b/service/src/routes/admin/AdminView.vue index 0a7adf26c6188c2277772757b4eb786e91ea33be..496e0cf090ad37cb36ad1959b1006f21150cbde4 100644 --- a/service/src/routes/admin/AdminView.vue +++ b/service/src/routes/admin/AdminView.vue @@ -14,6 +14,7 @@ mdi-account-multipleUsers + mdi-keyTokens mdi-robot-outlineProviders mdi-desktop-classicSystem @@ -109,6 +110,77 @@ + + + + Tokens + + + + {{ tokensError }} + + + Close + + + Add Token + + + + + + + Name + Created + Expires + Status + Actions + + + + + + mdi-key + + {{ token.name }} + {{ formatDate(token.create_date) }} + {{ formatDate(token.expire_date) }} + + + {{ isExpired(token.expire_date) ? 'Expired' : 'Active' }} + + + + + + + + + + No tokens found. + + + @@ -210,6 +282,15 @@ > Import + + + Wipe + @@ -405,6 +486,176 @@ + + + + Wipe Data + + Are you sure you want to wipe data? Cases, flows, designs, runs and associated data will be deleted forever. Metadata, accounts, tags, and other similar items will not be deleted. + + + + Cancel + Yes + + + + + + + + Add New Token + +
+ + + + + +
+ +
+ + Important: Copy this token now! You will never be able to see it again. + + + + + + Click the copy icon to copy the token to your clipboard. + +
+
+ + + + mdi-close + Cancel + + + mdi-check + Create Token + + + mdi-check + I've Copied the Token + + +
+
+ + + + + Delete Token + + Are you sure you want to delete the token "{{ tokenToDelete?.name }}"? This action cannot be undone. + + + + + Cancel + + + Delete + + + + + + + + + + Database Wipe Authorization + + +
+ + + + This operation will permanently delete all data. Please enter the privileged database password to continue. + +
+ + + + Cancel + + + Confirm Wipe + + +
+
+ + - - {{ snackbar.text }} - - + diff --git a/service/src/routes/main/MainView.vue b/service/src/routes/main/MainView.vue index ae7283e336024f8b867bedaf952ed330bc136bea..ee644a2d4481bf2ab35ded19e979473549ba9f79 100644 --- a/service/src/routes/main/MainView.vue +++ b/service/src/routes/main/MainView.vue @@ -54,11 +54,9 @@ -
- @@ -110,9 +108,30 @@ + + + + + + + + + @@ -151,9 +170,7 @@
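The `extractCostFields` helper added to `service/service/stats.go` above assumes the runner reports costs as a single whitespace-separated string of `key:value` pairs. A minimal standalone sketch of that parsing convention, trimmed to four of the six keys for brevity (the sample input string is hypothetical, not a captured runner payload):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCosts is a simplified sketch of the convention used by
// extractCostFields: tokenize the "costs" string on whitespace, split each
// token on ":", and parse the value as a float (token counts are truncated
// to int). Malformed tokens are silently skipped, as in the original.
func parseCosts(costs string) (inputCost, outputCost float64, inputTokens, outputTokens int) {
	for _, token := range strings.Fields(costs) {
		parts := strings.Split(token, ":")
		if len(parts) != 2 {
			continue // skip malformed tokens
		}
		val, err := strconv.ParseFloat(parts[1], 64)
		if err != nil {
			continue
		}
		switch parts[0] {
		case "inputCost":
			inputCost = val
		case "outputCost":
			outputCost = val
		case "inputTokens":
			inputTokens = int(val)
		case "outputTokens":
			outputTokens = int(val)
		}
	}
	return
}

func main() {
	// Hypothetical sample string; the real payload comes from the runner service.
	ic, oc, it, ot := parseCosts("inputCost:0.0012 outputCost:0.0034 inputTokens:120 outputTokens:450")
	fmt.Println(ic, oc, it, ot)
}
```

Because parsing is lenient, an unknown or garbled token simply leaves its field at the zero value, which is why the diff gates database writes on `c_cost_it > 0 && c_cost_ot > 0`.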