Commit
- Enhanced response generation
- Prompt optimizers added - *[code, shell_command]*
- Prompt optimizers added - *Console*
- Clear console  - *Console*
- Increase version count - v0.0.2
Simatwa committed Dec 30, 2023
1 parent 431a317 commit d05ab05
Showing 9 changed files with 167 additions and 22 deletions.
10 changes: 9 additions & 1 deletion docs/CHANGELOG.md
@@ -2,4 +2,12 @@

**What's new?**

-- Initial release.
+- Initial release.

## v0.0.2

**What's new?**
- Enhanced response generation
- Prompt optimizers added - *[code, shell_command]*
- Prompt optimizers added - *Console*
- Clear console - *Console*
38 changes: 29 additions & 9 deletions docs/README.md
@@ -1,9 +1,11 @@
<h1 align="center"> tgpt2 </h1>

<p align="center">
<!--
<a href="https://github.com/Simatwa/tgpt2/actions/workflows/python-test.yml"><img src="https://github.com/Simatwa/tgpt2/actions/workflows/python-test.yml/badge.svg" alt="Python Test"/></a>
<a href="LICENSE"><img alt="License" src="https://img.shields.io/static/v1?logo=GPL&color=Blue&message=GNUv3&label=License"/></a>
<a href="https://pypi.org/project/tgpt2"><img alt="PyPi" src="https://img.shields.io/static/v1?logo=pypi&label=Pypi&message=v0.0.1&color=green"/></a>
-->
<a href="LICENSE"><img alt="License" src="https://img.shields.io/static/v1?logo=GPL&color=Blue&message=MIT&label=License"/></a>
<a href="https://pypi.org/project/tgpt2"><img alt="PyPi" src="https://img.shields.io/static/v1?logo=pypi&label=Pypi&message=v0.0.2&color=green"/></a>
<a href="https://github.com/psf/black"><img alt="Black" src="https://img.shields.io/static/v1?logo=Black&label=Code-style&message=Black"/></a>
<a href="#"><img alt="Passing" src="https://img.shields.io/static/v1?logo=Docs&label=Docs&message=Passing&color=green"/></a>
<a href="https://github.com/Simatwa/tgpt2/actions/workflows/python-package.yml"><img src="https://github.com/Simatwa/tgpt2/actions/workflows/python-package.yml/badge.svg"/></a>
@@ -21,11 +23,12 @@ AI for all
</p>

```python
-from tgpt2 import TGPT
-bot = TGPT()
-resp = bot.chat('<Your prompt>')
-print(resp)
-# Output : How may I help you.
+>>> import tgpt2
+>>> bot = tgpt2.TGPT()
+>>> bot.chat('Hello there')
+" Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?"
+>>>

```

This project allows you to interact with AI ([LLaMA](https://ai.meta.com/llama/)) without an API key.
@@ -58,7 +61,7 @@ pip install git+https://github.com/Simatwa/tgpt2.git

```
git clone https://github.com/Simatwa/tgpt2.git
-cd tgp2
+cd tgpt2
pip install .
```

@@ -147,10 +150,27 @@ for value in resp:
```


</details>

<details>

<summary>

- To get better feedback, you can make use of [optimizers](tgpt2/utils.py) via the `optimizer` parameter with values *(code or shell_command)*

</summary>

```python
from tgpt2 import TGPT
bot = TGPT()
resp = bot.ask('<Your Prompt>', optimizer='code')
print(resp)
```

</details>
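For context, an optimizer is just a function that rewrites the prompt before it is sent. Below is a minimal, self-contained sketch of the idea; `apply_optimizer` and the shortened instruction strings are illustrative stand-ins, not the package's actual code in [tgpt2/utils.py](tgpt2/utils.py):

```python
# Illustrative sketch: an "optimizer" prepends role instructions to the
# prompt so the model replies with only code or only a shell command.
# Names and strings here are simplified stand-ins for tgpt2/utils.py.

OPTIMIZERS = {
    "code": lambda p: (
        "Your Role: Provide only code as output without any description.\n"
        f"Request: {p}\nCode:"
    ),
    "shell_command": lambda p: (
        "Provide only a plain-text shell command without any description.\n"
        f"Prompt: {p}\n\nCommand:"
    ),
}


def apply_optimizer(prompt: str, optimizer: str) -> str:
    """Rewrite `prompt` with the named optimizer, as TGPT.ask() does."""
    if optimizer not in OPTIMIZERS:
        raise ValueError(f"optimizer must be one of {sorted(OPTIMIZERS)}")
    return OPTIMIZERS[optimizer](prompt)


print(apply_optimizer("reverse a string in Python", "code"))
```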


-> **Note** : At the time of wriiting this, Chatting conversational is not supported
+> **Note** : At the time of writing this, Chatting conversationally is not supported
## Acknowledgements

2 changes: 1 addition & 1 deletion setup.py
@@ -12,7 +12,7 @@

setup(
name="tgpt2",
-version="0.0.1",
+version="0.0.2",
license="MIT",
author="Smartwa",
maintainer="Smartwa",
6 changes: 6 additions & 0 deletions tests/test_tgpt.py
@@ -50,11 +50,17 @@ def test_chat_stream(self):
for value in resp:
self.assertIsInstance(value, str)

def test_optimizer_usage(self):
"""Code optimization"""
resp = self.bot.chat(self.prompt, optimizer='code')
self.assertIsInstance(resp, str)

def test_last_response(self):
"""Last response availability"""
self.bot.chat(self.prompt)
self.assertIsInstance(self.bot.last_response, dict)



if __name__ == "__main__":
unittest.main()
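The new `test_optimizer_usage` case only checks the return type of `chat()` when `optimizer='code'` is passed. Here is a self-contained sketch of that pattern with a stub in place of the real, network-backed `TGPT` bot; the `FakeBot` stub is hypothetical and not part of the repo:

```python
import unittest


class FakeBot:
    """Offline stand-in for tgpt2.TGPT, which hits the network."""

    def chat(self, prompt, optimizer=None):
        return f"response to: {prompt}"


class TestOptimizer(unittest.TestCase):
    def setUp(self):
        self.bot = FakeBot()
        self.prompt = "hello"

    def test_optimizer_usage(self):
        """Code optimization"""
        resp = self.bot.chat(self.prompt, optimizer="code")
        self.assertIsInstance(resp, str)


suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestOptimizer)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```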
2 changes: 1 addition & 1 deletion tgpt2/__init__.py
@@ -1,7 +1,7 @@
from .tgpt2 import TGPT
from .imager import Imager

-__version__ = "0.0.1"
+__version__ = "0.0.2"
__author__ = "Smartwa"
__repo__ = "https://github.com/Simatwa/tgpt2"

34 changes: 32 additions & 2 deletions tgpt2/console.py
@@ -3,6 +3,7 @@
import cmd
import logging
import os
import sys
import clipman
import re
import rich
@@ -16,6 +17,7 @@
from rich.live import Live
from rich.prompt import Prompt
from typing import Iterator
from tgpt2.utils import Optimizers

getExc = lambda e: e.args[1] if len(e.args) > 1 else str(e)

@@ -211,7 +213,7 @@ def do_settings(self, line):
"Enter code_theme", choices=rich_code_themes, default=self.code_theme
)
self.prettify = click.confirm(
-"Prettify markdown response", default=self.prettify
+"\nPrettify markdown response", default=self.prettify
)
self.color = click.prompt("Response stdout font color", default=self.color)

@@ -294,6 +296,30 @@ def do_with_copied(self, line):
if click.confirm("Do you wish to proceed"):
self.default(issued_prompt)

@busy_bar.run()
def do_code(self, line):
"""Enhance prompt for code generation
usage :
code <Code description>
"""
self.default(Optimizers.code(line))

@busy_bar.run()
def do_shell(self, line):
"""Enhance prompt for system command (shell) generation
Usage:
shell <Action to be accomplished>
"""
self.default(Optimizers.shell_command(line))
if click.confirm("Do you wish to run the command(s) generated on your system"):
self.do_sys(self.bot.get_message(self.bot.last_response))

def do_clear(self, line):
"""Clear console"""
sys.stdout.write("\u001b[2J\u001b[H")
sys.stdout.flush()
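`do_clear` works by writing two ANSI escape sequences: `ESC[2J` erases the visible screen and `ESC[H` moves the cursor to the top-left corner, so no `os.system("clear")` subprocess is needed. A quick check of the sequence:

```python
# "\u001b" is the ESC byte (0x1B); "[2J" erases the display and
# "[H" moves the cursor to row 1, column 1.
CLEAR_SEQUENCE = "\u001b[2J\u001b[H"

print(repr(CLEAR_SEQUENCE))  # → '\x1b[2J\x1b[H'
```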


@busy_bar.run()
def default(self, line):
"""Chat with ChatGPT"""
Expand Down Expand Up @@ -327,7 +353,11 @@ def default(self, line):

def do_sys(self, line):
"""Execute system commands
-shortcut [./<command>]
+shortcut [./<command>]
+Usage:
+sys <System command>
+or
+./<System command>
"""
os.system(line)

6 changes: 3 additions & 3 deletions tgpt2/imager.py
@@ -50,11 +50,11 @@ def generate(self, prompt: str) -> dict:
}

response = session.post(self.image_gen_endpoint, json=payload)
-#out = lambda data: print(json.dumps(dict(data), indent=4))
-#out(response.headers)
+# out = lambda data: print(json.dumps(dict(data), indent=4))
+# out(response.headers)
try:
pass
-#out(response.json())
+# out(response.json())
except:
pass
return response.text
32 changes: 27 additions & 5 deletions tgpt2/tgpt2.py
@@ -1,5 +1,6 @@
import requests
import json
from .utils import Optimizers

session = requests.Session()

@@ -41,15 +42,27 @@ def __init__(
"x-brave-key": brave_key,
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:99.0) Gecko/20100101 Firefox/110.0",
}
self.__available_optimizers = tuple(
    method
    for method in dir(Optimizers)
    if callable(getattr(Optimizers, method)) and not method.startswith("__")
)
session.headers.update(self.headers)

-def ask(self, prompt: str, stream: bool = False, raw: bool = False) -> dict:
+def ask(
+    self,
+    prompt: str,
+    stream: bool = False,
+    raw: bool = False,
+    optimizer: str = None,
+) -> dict:
"""Chat with AI
Args:
prompt (str): Prompt to be sent
stream (bool, optional): Flag for streaming response. Defaults to False.
raw (bool, optional): Stream back raw response as received
optimizer (str, optional): Prompt optimizer name - `[code, shell_command]`
Returns:
dict : {}
```json
@@ -64,11 +77,19 @@ def ask(self, prompt: str, stream: bool = False, raw: bool = False) -> dict:
}
```
"""
if optimizer:
if optimizer in self.__available_optimizers:
prompt = getattr(Optimizers, optimizer)(prompt)
else:
raise Exception(
f"Optimizer is not one of {self.__available_optimizers}"
)

session.headers.update(self.headers)
payload = {
"max_tokens_to_sample": self.max_tokens_to_sample,
"model": self.model,
-"prompt": prompt,
+"prompt": f"[INST] {prompt} [/INST]",
"self.stop_sequence": self.stop_sequences,
"stream": stream,
"top_k": self.top_k,
@@ -117,21 +138,22 @@ def for_non_stream():

return for_stream() if stream else for_non_stream()

-def chat(self, prompt: str, stream: bool = False) -> str:
+def chat(self, prompt: str, stream: bool = False, optimizer: str = None) -> str:
"""Generate response `str`
Args:
prompt (str): Prompt to be sent
stream (bool, optional): Flag for streaming response. Defaults to False.
optimizer (str, optional): Prompt optimizer name - `[code, shell_command]`
Returns:
str: Response generated
"""

def for_stream():
-for response in self.ask(prompt, stream=True):
+for response in self.ask(prompt, True, optimizer):
yield self.get_message(response)

def for_non_stream():
-return self.get_message(self.ask(prompt, stream=False))
+return self.get_message(self.ask(prompt, False, optimizer))

return for_stream() if stream else for_non_stream()
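`chat()` dispatches between a generator (streaming) and a plain string. A runnable sketch of the same dispatch with a stubbed `ask`; the `completion` key and the `fake_ask` stub are illustrative assumptions, not the provider's actual response schema:

```python
# Sketch of chat()'s stream/non-stream dispatch. One path lazily yields
# message chunks; the other returns the complete message at once.
def fake_ask(prompt, stream=False):
    chunks = [{"completion": "Hel"}, {"completion": "lo"}]
    return iter(chunks) if stream else {"completion": "Hello"}


def get_message(response):
    return response["completion"]


def chat(prompt, stream=False):
    def for_stream():
        for response in fake_ask(prompt, stream=True):
            yield get_message(response)

    def for_non_stream():
        return get_message(fake_ask(prompt, stream=False))

    return for_stream() if stream else for_non_stream()


print(chat("hi"))                        # → Hello
print("".join(chat("hi", stream=True)))  # → Hello
```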

59 changes: 59 additions & 0 deletions tgpt2/utils.py
@@ -0,0 +1,59 @@
import os
import platform
import subprocess


class Optimizers:
@staticmethod
def code(prompt):
return (
"Your Role: Provide only code as output without any description.\n"
"IMPORTANT: Provide only plain text without Markdown formatting.\n"
"IMPORTANT: Do not include markdown formatting."
"If there is a lack of details, provide most logical solution. You are not allowed to ask for more details."
"Ignore any potential risk of errors or confusion.\n\n"
f"Request: {prompt}\n"
f"Code:"
)

@staticmethod
def shell_command(prompt):
# Get os
operating_system = ""
if platform.system() == "Windows":
operating_system = "Windows"
elif platform.system() == "Darwin":
operating_system = "MacOS"
elif platform.system() == "Linux":
try:
result = (
subprocess.check_output(["lsb_release", "-si"]).decode().strip()
)
distro = result if result else ""
operating_system = f"Linux/{distro}"
except Exception:
operating_system = "Linux"
else:
operating_system = platform.system()

# Get Shell
shell_name = "/bin/sh"
if platform.system() == "Windows":
shell_name = "cmd.exe"
if os.getenv("PSModulePath"):
shell_name = "powershell.exe"
else:
shell_env = os.getenv("SHELL")
if shell_env:
shell_name = shell_env

return (
"Your role: Provide only plain text without Markdown formatting. "
"Do not show any warnings or information regarding your capabilities. "
"Do not provide any description. If you need to store any data, "
f"assume it will be stored in the chat. Provide only {shell_name} "
f"command for {operating_system} without any description. If there is "
"a lack of details, provide most logical solution. Ensure the output "
"is a valid shell command. If multiple steps required try to combine "
f"them together. Prompt: {prompt}\n\nCommand:"
)
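The detection above condenses to a few lines; this runnable sketch mirrors its logic (the source treats a set `PSModulePath` variable as a sign of a PowerShell session, and `$SHELL` as the POSIX shell):

```python
import os
import platform


def detect_shell() -> str:
    """Condensed mirror of the shell detection in Optimizers.shell_command."""
    if platform.system() == "Windows":
        # The source keys off PSModulePath to guess a PowerShell session.
        return "powershell.exe" if os.getenv("PSModulePath") else "cmd.exe"
    # POSIX shells export $SHELL; fall back to the portable default.
    return os.getenv("SHELL") or "/bin/sh"


print(detect_shell())
```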
