Grießhaber Daniel / evoprompt / Commits / e7ccb8aa

Commit e7ccb8aa authored 10 months ago by Grießhaber Daniel

    Merge branch 'master' into integrate-frontend

Parents: 2c0102c4, d819c1e6
Showing 2 changed files with 5 additions and 6 deletions:

    api/main.py    +3 −2
    task.py        +2 −4
api/main.py (+3 −2)

    from contextlib import asynccontextmanager

    from api.optimization import MultiProcessOptimizer
    from api.routers import runs
    from fastapi import BackgroundTasks, FastAPI, Request, Response
    from fastapi.staticfiles import StaticFiles
    from requests import request as make_request

    from api.optimization import MultiProcessOptimizer
    from api.routers import runs

    # see https://github.com/tiangolo/fastapi/issues/3091#issuecomment-821522932 and https://github.com/encode/starlette/issues/1094#issuecomment-730346075 for heavy-load computation
    DEBUG = True
    ...
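The linked FastAPI and Starlette issues in the hunk's comment concern heavy-load computation, and the `asynccontextmanager` import at the top of the file is the building block FastAPI accepts as a `lifespan` handler. As a minimal, standard-library-only sketch of that pattern (the optimizer setup mentioned in the comments below is hypothetical, and no FastAPI is imported; a real app would pass `lifespan` to `FastAPI(lifespan=...)`):

```python
import asyncio
from contextlib import asynccontextmanager

events = []  # records the startup/serve/shutdown order for illustration


@asynccontextmanager
async def lifespan(app):
    # startup phase: e.g. spawn a MultiProcessOptimizer worker pool (hypothetical)
    events.append("startup")
    yield
    # shutdown phase: e.g. release worker processes
    events.append("shutdown")


async def serve(app):
    # stand-in for the framework's request loop: enter the context,
    # handle requests while it is open, then exit cleanly
    async with lifespan(app):
        events.append("handling requests")


asyncio.run(serve(app=None))
print(events)  # ['startup', 'handling requests', 'shutdown']
```

The point of the pattern is that expensive resources are created once before the first request and torn down exactly once, rather than per request.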
task.py (+2 −4)

    @@ -5,15 +5,13 @@ from functools import lru_cache
    from statistics import mean
    from typing import Union

    from cli import argument_parser
    from datasets import Dataset, load_dataset
    from evaluate import load as load_metric
    from llama_cpp import LlamaGrammar, deque
    from torch.utils import data
    from tqdm import tqdm

    from cli import argument_parser
    from models import Llama2, LLMModel, OpenAI
    from opt_types import ModelUsage
    from tqdm import tqdm
    from utils import log_calls, logger

    SYSTEM_MESSAGE = """
    ...
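The hunk header shows `task.py` importing `lru_cache` and `statistics.mean`, a common pairing when a task averages per-sample metric scores and caches repeated work. A small illustrative sketch under those assumptions (the exact-match metric and all names here are hypothetical, not the project's actual evaluation code):

```python
from functools import lru_cache
from statistics import mean


@lru_cache
def normalize(text: str) -> str:
    # cache normalization of repeated gold answers / predictions
    return text.strip().lower()


def exact_match(predictions: list[str], references: list[str]) -> float:
    # score each pair 1.0 on a (normalized) exact match, else 0.0,
    # then average the per-sample scores
    scores = [
        1.0 if normalize(p) == normalize(r) else 0.0
        for p, r in zip(predictions, references)
    ]
    return mean(scores)


print(exact_match(["Paris", " rome "], ["paris", "Rome"]))  # 1.0
```

In the real file the metric would come from `evaluate`'s `load_metric` over a `datasets.Dataset`, with `mean` aggregating the results.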