idk wth is happening help #713

Closed
Kilgorio opened this issue Apr 2, 2023 · 8 comments

Comments

@Kilgorio

Kilgorio commented Apr 2, 2023

PS C:\Users\Admin> cd D:\Software\GPT4ALL\llama.cpp
PS D:\Software\GPT4ALL\llama.cpp> make
process_begin: CreateProcess(NULL, uname -s, ...) failed.
Makefile:2: pipe: No error
process_begin: CreateProcess(NULL, uname -p, ...) failed.
Makefile:6: pipe: No error
process_begin: CreateProcess(NULL, uname -m, ...) failed.
Makefile:10: pipe: No error
'cc' is not recognized as an internal or external command,
operable program or batch file.
'head' is not recognized as an internal or external command,
operable program or batch file.
I llama.cpp build info:
I UNAME_S:
I UNAME_P:
I UNAME_M:
I CFLAGS: -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -mfma -mf16c -mavx -mavx2
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function
I LDFLAGS:
I CC:
I CXX:

cc -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -mfma -mf16c -mavx -mavx2 -c ggml.c -o ggml.o
process_begin: CreateProcess(NULL, cc -I. -O3 -DNDEBUG -std=c11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wdouble-promotion -Wshadow -Wstrict-prototypes -Wpointer-arith -Wno-unused-function -mfma -mf16c -mavx -mavx2 -c ggml.c -o ggml.o, ...) failed.
make (e=2): The system cannot find the file specified.
make: *** [Makefile:229: ggml.o] Error 2

@MillionthOdin16

This is definitely not the proper format. Look at the issue template, look at other issues, and put in the effort to explain your problem before expecting others to pick up the slack.

@Kilgorio
Author

Kilgorio commented Apr 2, 2023

I looked at it and I said

"I don't have time for that"

@slaren
Member

slaren commented Apr 2, 2023

You need to use CMake to build on Windows.
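For reference, the usual CMake flow looks roughly like this (a sketch, not the project's official instructions; it assumes CMake and Visual Studio or its Build Tools are installed, and reuses the path from the log above):

```shell
# Run from a Developer PowerShell / Developer Command Prompt
cd D:\Software\GPT4ALL\llama.cpp

# Generate the build files in a separate build directory
mkdir build
cd build
cmake ..

# Compile; the resulting binaries end up under bin\Release
cmake --build . --config Release
```

The `make` errors in the log (`CreateProcess(NULL, uname -s, ...) failed`, `'cc' is not recognized`) happen because the Makefile assumes Unix tools (`uname`, `cc`, `head`) that don't exist in a plain Windows shell; CMake sidesteps that by generating a native Visual Studio build.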

@Kilgorio
Author

Kilgorio commented Apr 2, 2023

but how

@Kilgorio
Author

Kilgorio commented Apr 2, 2023

I am only doing this to get tokenizer.model so I can convert gpt4all to work with KoboldAI, and through that with TavernAI.
I think I will get tokenizer.model through this, because I currently don't have it.

@slaren
Member

slaren commented Apr 2, 2023

The tokenizer.model file is part of the original llama models distribution, and you won't get it by compiling this project. We should include build instructions for Windows in the README, but for now you can use one of the pre-compiled binaries available at https://github.com/ggerganov/llama.cpp/tags

@slaren slaren closed this as completed Apr 2, 2023
@Kilgorio
Author

Kilgorio commented Apr 2, 2023

I still can't find it

@slaren
Member

slaren commented Apr 2, 2023

@Kilgorio unfortunately I cannot tell you where to find the original llama models as that is explicitly against the policy of this project. You will have to look for that elsewhere.
