The iOS build succeeds, but networking fails at runtime. Checking the Xcode log shows the errors below.
iOS now enforces HTTPS, so attempting to communicate over an http URL produces a runtime error.
2020-12-28 20:02:53.378574+0900 project[46312:8363659]
You are using download over http.
Currently Unity adds NSAllowsArbitraryLoads to Info.plist to simplify transition,
but it will be removed soon. Please consider updating to https.
2020-12-28 20:02:53.380318+0900 project[46312:8363628]
App Transport Security has blocked a cleartext HTTP (http://) resource load since it is insecure.
Temporary exceptions can be configured via your app's Info.plist file.
2020-12-28 20:02:53.380351+0900 project[46312:8363628]
Cannot start load of Task <9D5615C0-5B46-4144-9851-1EB6BFDEAF4A>.
<0> since it does not conform to ATS policy
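While migrating to https, a temporary workaround is an App Transport Security exception in Info.plist. This is a hedged sketch: the domain below is an illustrative placeholder, and Apple may reject broad exceptions during review.

```xml
<!-- Info.plist fragment: allow cleartext HTTP for one specific domain.
     "example.com" is a placeholder; replace it with your server's domain. -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>example.com</key>
        <dict>
            <key>NSExceptionAllowsInsecureHTTPLoads</key>
            <true/>
        </dict>
    </dict>
</dict>
```

A per-domain exception is narrower than the NSAllowsArbitraryLoads key Unity currently injects, which disables ATS for every request.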
In this Unity Input System tutorial, you’ll learn how to convert player input in your existing projects from the old Input Manager to the new Input System.
Version
C# 7.3, Unity 2020.1
Handling input is a pillar of creating a successful game. Input devices help player characters perform actions inside the game such as walking, jumping and climbing.
Recently, many new platforms have come out, introducing more Unity input devices. These include touch screens, VR controllers and gamepads from different game consoles. If you want your game to support different platforms, you need to write code to handle the logic for different devices. The more platforms you support, the more complex your code will be.
Luckily, there’s a new Unity Input System that helps developers deal with this situation, making input-related code easier to manage.
In this tutorial, you’ll learn:
The features of the new Input System.
How the new system works.
How to migrate apps to the new Input System from the old Input Manager.
The materials for this tutorial were built in Unity version 2020.1. You can get this version of Unity from the Unity website or install it with the Unity Hub.
Note: Although this tutorial is for beginners, you’ll need to have some basic knowledge of Unity development and how to work with the Unity Editor to complete it. If you’re new to Unity development, check out our tutorial on Getting Started in Unity.
Getting Started
First, download the starter project for this tutorial using the Download Materials button at the top or bottom of this page. Unzip its contents and open NewUnityInputSystem-Starter in Unity.
After the project loads, you’ll see the RW folder in the Project Window:
Take a look at the organization of the folder:
Fonts: Fonts used in the scene.
Materials: Materials for the scene.
Models: 3D meshes for the player character and game environments.
Prefabs: Pre-built components composed of Scripts and Models.
Scenes: The game scene.
Scripts: Scripts with game logic.
Settings: The settings file, which is where you’ll put the input settings.
Shaders: Shaders for special effects like the player’s shadow.
Textures: The graphics used by Materials and UIs.
The starter project is a simple platformer game. You control the player character by moving around and jumping to collect coins.
The game is ready to play. Open GameScene from the Scenes folder and click Play to try the game for yourself!
Currently, the Move and Jump controls use the old Unity Input Manager. You’ll learn how to use the new Input System to replace the old system later in the tutorial.
What’s New in the Unity Input System
Before diving into the code, take a look at the new Input System’s features.
Simpler Action and Binding Architecture
The old input system checked input from different devices every frame to determine whether players took an action.
The following code, which supports both gamepads and keyboards, is an example of the old way of doing things:
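The snippet below is a hedged reconstruction of that pattern, not the tutorial’s exact code: it polls each supported device every frame, and the handler name HandleJump is illustrative.

```csharp
// Legacy Input Manager style: poll every device, every frame.
void Update()
{
    if (Input.GetKeyDown(KeyCode.Space))
    {
        // Keyboard path
        HandleJump();
    }
    else if (Input.GetButtonDown("Fire1"))
    {
        // Gamepad/mouse path, mapped through an Input Manager button axis
        HandleJump();
    }
}
```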
The code uses if-else branching to handle support for different devices and their associated actions.
The new Input System separates device input from code actions. That means you only have to handle the actions the players trigger. You don’t need to know which device the player is using or which button they’re clicking.
An input event in the new system is called an action, while the mapping between an action and an input device is a binding.
Gathering Information With the Input Debug Tool
The Input System provides you with a new tool called the Input Debugger. Open it by selecting Window ▸ Analysis ▸ Input Debugger from the menu.
The Input Debugger helps developers by gathering the following information in one place:
The state of the Input System, including:
Device: Information about the connected devices.
Layout: Which controls those devices provide.
Setting: The configuration of the input system.
It also provides real-time information about a specific device. Open this by double-clicking the device from the device list in the Input Debugger window.
Here’s a demo of the Input Debugger in action:
Feel free to keep the Input Debugger open while you work through the tutorial.
Support for Multiple Devices and Platforms
With the increased flexibility from the Input System’s new architecture, Unity can support many different input devices, including:
Keyboard
Mouse
Pen
TouchScreen
Sensor
Joystick
GamePad
Note: The Input System also supports devices that implement the USB HID specification. For more details, check out Unity’s Supported Devices documentation.
Understanding the New Input System
The new Input System has four building blocks that funnel events from player devices to your game code:
Input Action Assets: A settings file that contains the properties of the actions and their associated bindings.
Actions: Actions define the logical meanings of the input. You can find that information in the Input Action Assets.
Bindings: Bindings describe the connection between an action and the input device controls. You’ll find this information in the Input Action Assets, too.
PlayerInput: PlayerInput is a script that manages and links action events to the corresponding code logic.
Sometimes it’s easier to understand a new workflow if you can visualize it, so take a look at the image below:
Break this down into its simplest steps:
First, the Unity Engine collects information from the connected devices and sends corresponding events, like a button click, to the Input System.
The Input System then translates those events into actions, based on the action and binding information stored in the Input Action Assets.
It then passes the actions to the PlayerInput script, which invokes the corresponding methods.
Now that you know a little more about how the Input System works, you’ll use it to control the game character in the coming sections.
Installing the New Input System
The first thing you’ll do is install the new Input System package. The standard Unity installation doesn’t include it.
Open Window ▸ Package Manager in the menu bar. Make sure that you select Unity Registry in the Packages dropdown, if you haven’t already.
Find Input System on the list. Select it and click Install.
Creating an Input Action Asset
Once you’ve installed the Input System package, you’ll create an Input Action Asset to store the settings for your actions and bindings.
Open the Project window and select Settings from RW. Right-click, select Create ▸ Input Actions and rename the new asset to MyControl.
Setting up the Action Editor
Double-click MyControl in Settings to open the Action Editor, which helps you manipulate actions and control bindings.
Since this is a new window, take a look at the sections:
Action Maps: Groups of actions that occur in the game. You can group actions for different purposes, like player, gameplay or UI.
Actions: The list of actions and bindings associated with the selected Action Map. In this panel, you create, modify or delete actions and bindings.
Properties: Edit the action or binding properties in this panel, such as the type of action and the controls associated with the binding.
Save Asset: This is a very important button: you must click Save Asset after making any changes to the Input Action Asset. If you forget to save, the settings won’t take effect; you won’t see the expected result and may think there’s a bug in your code.
You can switch on Auto-Save to prevent this problem, but saving then becomes quite slow.
Now you’re ready to create your first action, the Jump action.
Creating a Jump Action
First, open the Action Editor and click the + icon in the Action Maps panel to create a new Action Map. Rename it from the default, New Action Map, to Player.
Then, in the Actions panel, double-click New Action and rename it to a meaningful name: Jump.
Finally, you need to add a binding to the Jump action. You’ll bind the Spacebar and Left Mouse Button to this action by following these steps:
Select the Jump action, click the + icon and select Add Binding.
Click the new binding item, <No Binding>.
Click the Path field in the Binding properties panel.
Type Spacebar and select Space [Keyboard] to create the binding for the Spacebar.
Repeat steps 1–3 to create another binding for the Left Mouse Button.
Type Left Button in the Path field and select Left Button [Mouse] to create the binding.
Congratulations, you’ve now associated the Jump action with the Spacebar on the keyboard and the left button on the mouse.
Now to hook up those actions with your code!
Implementing the Jump Logic
First of all, you need to remove the old input logic from the project. Open Player.cs and navigate to the Update() method.
As you can see, the current code triggers the animation updates, then it checks if the player has pressed the space bar in order to start a jump.
Now that the Jump action and its control bindings are ready, the next thing to do is link the action to the code.
Linking the Jump Action to the Code
Start by deleting the code in Update to remove the old Input Manager implementation, so you can add Jump logic using the new Input System. Update will now only control the animations:
void Update()
{
    UpdateAnimation();
}
Save the script and go back to the editor. Select the Player object in the Hierarchy and add a PlayerInput component in the Inspector.
Next, drag MyControl to the PlayerInput component’s Actions field. Make sure to set the Default Map to Player.
Finally, open Player.cs and add a new method called OnJump() with the following code:
public void OnJump()
{
    HandleJump();
}
You’ve associated this method with the Jump action by using this pattern to name it: public void On[Action Name Goes Here]().
For example, the Jump action invokes OnJump(), while an Attack action would invoke OnAttack().
Click Save Asset in the Action Editor and run the game. Now you can use the Spacebar or the left mouse button to make the player character jump. It’s really that easy!
Creating the Move Action
You’ve learned how to use the Input System to create a Jump action. Next up is the Move action! Move is similar to Jump, but it has a few key differences.
For example, the Jump action is a simple trigger event, while the Move action is an event that carries values: the movement direction, which comes from user input.
Again, you need to create the action and its binding. Start by going to the Action Editor (double-click MyControl if you lost the window) and click the + icon in the Actions panel to create a new action. Rename it to Move.
Next, open the Action properties panel and change Action Type to Value and Control Type to Vector 2.
Finally, remove <No Binding> by right-clicking it and selecting Delete.
Now, you need to create the Move action’s bindings.
First, click the + icon in the header of the Move action. Then, select Add 2D Vector Composite, which creates four binding items corresponding to the up, down, left and right directions.
Now, you’ll set the path of each binding as follows:
Up: Up Arrow [Keyboard]
Down: Down Arrow [Keyboard]
Left: Left Arrow [Keyboard]
Right: Right Arrow [Keyboard]
Don’t forget to save the asset in the Action Editor!
Implementing the Move Logic
Before adding new movement logic, you need to remove the implementation of the old Unity input.
Note that FixedUpdate() is called in every fixed frame-rate frame.
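As a hedged reconstruction of the old movement code (the field names isGrounded, and the handler methods, are assumptions; the axis reads and moveVec match the breakdown that follows):

```csharp
void FixedUpdate()
{
    // 1. Old Input Manager: read the raw axis values every physics step.
    float h = Input.GetAxisRaw("Horizontal"); // X-axis
    float v = Input.GetAxisRaw("Vertical");   // Y-axis

    // 2. These two values form the movement vector driving the player.
    Vector3 moveVec = new Vector3(h, 0f, v);

    if (isGrounded)
    {
        // 3. Behavior while the character is on the ground (illustrative)
        MoveOnGround(moveVec);
    }
    else
    {
        // 4. Behavior while the character is jumping (illustrative)
        MoveInAir(moveVec);
    }
}
```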
Now, break this down:
Input.GetAxisRaw returns the value of an axis. Input.GetAxisRaw("Horizontal") gives the value of the X-axis, while Input.GetAxisRaw("Vertical") gives the value of the Y-axis.
These two values define the movement vector moveVec, which you use to control the direction of the player movement.
The logic of the player character’s behavior while it’s on the ground.
The logic of the player character’s behavior while it’s jumping.
Now, delete all the code prior to the if statement to remove the old input logic. Add the following code above the class definition:
using UnityEngine.InputSystem;
This allows you to access values from the new Input System.
When a player presses the Up, Down, Left or Right keys, the Input System passes a Move action to this method, along with the values. Here’s how the key presses affect the values:
Up: (0, 1)
Down: (0, -1)
Left: (-1, 0)
Right: (1, 0)
No Key: (0, 0)
Up and Left: (-1, 1)
InputValue is a new type you may not have seen before. This class has a Get<T>() method that you can use to read its value. In this instance, you use it to retrieve the value of the 2D Vector Composite you set up in the binding, and from that you calculate the movement vector.
Click Play to test the logic.
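Putting this together, a hedged sketch of the Move handler (the moveVec field and the x/z axis mapping are assumptions; PlayerInput’s SendMessages behavior passes an InputValue argument):

```csharp
// Called by PlayerInput (SendMessages behavior) when the Move action fires.
public void OnMove(InputValue input)
{
    // Get<Vector2>() returns the value produced by the 2D Vector Composite
    // binding, e.g. (0, 1) for Up or (-1, 0) for Left.
    Vector2 inputVec = input.Get<Vector2>();

    // Map the 2D input onto the horizontal movement plane.
    moveVec = new Vector3(inputVec.x, 0f, inputVec.y);
}
```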
Handling Actions
The new Input System provides four ways to handle action events.
In this tutorial, you used the Send Messages approach. You can change this option in the Behavior field of the PlayerInput component.
Send Messages and Broadcast Messages are the simplest ways to handle actions. When you use these two options, the system invokes the method whose name matches the name of the action.
For example, in this tutorial, the Jump action invokes OnJump() and the Move action invokes OnMove().
Broadcast Messages is similar to Send Messages, except it can also invoke the methods on any child GameObject. These two options are easy to use because you don’t need to configure anything to use them.
Using Invoke Unity Events
When using Invoke Unity Events, you configure the action much as you’d configure a button click in Unity UI.
This approach is more flexible, letting you use different methods in different objects. Those GameObjects don’t even need to have the PlayerInput component.
Using Invoke C# Events
This approach is as flexible as Invoke Unity Events. You can define the methods you want to use instead of relying on methods with a specific name. However, if you use this approach, you need to write code to control which methods to invoke.
Gets the PlayerInput component and registers the method to onActionTriggered.
Controls which method to call for different actions.
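A hedged sketch of how that wiring might look (the class and handler names are illustrative; onActionTriggered is PlayerInput’s C# event for this behavior):

```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class InputHandler : MonoBehaviour
{
    void Awake()
    {
        // 1. Get the PlayerInput component and register the callback.
        var playerInput = GetComponent<PlayerInput>();
        playerInput.onActionTriggered += HandleAction;
    }

    // 2. Decide which method to call for each action.
    void HandleAction(InputAction.CallbackContext context)
    {
        switch (context.action.name)
        {
            case "Jump":
                if (context.performed) Debug.Log("Jump!");
                break;
            case "Move":
                Vector2 move = context.ReadValue<Vector2>();
                Debug.Log($"Move: {move}");
                break;
        }
    }
}
```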
Using the Update Cycle of the New Input System
In the old Unity Input Manager, you checked the input in every frame using Update(). With the new Input System, you may wonder when actions are sent, and whether they’re sent before every Update().
The new Input System uses a different update cycle that’s independent of MonoBehaviour’s lifecycle. You can read more about it in Unity’s Execution Order documentation.
The system offers three Update Modes to control the update cycle. You can configure them in Project Settings ▸ Input System Package ▸ Update Mode.
Take a look at each of these modes:
Dynamic Update: Processes events at irregular intervals determined by the current frame rate. This is the default setting.
Fixed Update: Processes events at fixed-length intervals. Time.fixedDeltaTime determines the length of the interval.
Manually: Events aren’t processed automatically; you process them when you call InputSystem.Update(). If you want a check similar to the old system, you can call InputSystem.Update() in Update().
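For the manual mode, the per-frame pump might be sketched like this (requires using UnityEngine.InputSystem; placing it in Update() is just one option):

```csharp
// Manual update mode: pump the Input System yourself once per frame
// to approximate the old per-Update polling cadence.
void Update()
{
    InputSystem.Update(); // processes all queued input events now
}
```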
These new options, as part of the new Input System, give you a lot more control over input, whilst also making it easier to support multiple input devices :]
Where to Go from Here?
Download the completed project using the Download Materials button at the top or bottom of this tutorial.
In this Unity Input tutorial, you’ve learned:
The basic layout of the new Input System.
How to use actions and bindings.
How to handle different kinds of player input efficiently.
To test your skill, try adding a Pause action to the game!
For the past couple of weeks, I have been trying to replicate the Photoshop blend modes in Unity. It is no easy task; despite the advances of modern graphics hardware, the blend unit still resists being programmable and will probably remain fixed for some time. Some OpenGL ES extensions implement this functionality, but most hardware and APIs don’t. So what options do we have?
1) Backbuffer copy
A common approach is to copy the entire backbuffer before doing the blending. This is what Unity does. After that it’s trivial to implement any blending you want in shader code. The obvious problem with this approach is that you need to do a full backbuffer copy before you do the blending operation. There are certainly some possible optimizations, like only copying what you need to a smaller texture of some sort, but it gets complicated once you have many objects using blend modes. You can also do just a single backbuffer copy and re-use it, but then you can’t stack different blended objects on top of each other. In Unity, this is done via a GrabPass. It is the approach used by the Blend Modes plugin.
2) Leveraging the Blend Unit
Modern GPUs have a little unit at the end of the graphics pipeline called the Output Merger. It’s the hardware responsible for taking the output of a pixel shader and blending it with the backbuffer. It’s not programmable: making it programmable has quite a lot of complications (you can read about it here), so current GPUs don’t have a programmable blend unit.
The blend mode formulas were obtained here and here; use them as a reference to compare with what I provide. There are many other sources. One thing I’ve noticed is that the provided formulas often neglect to mention that Photoshop actually uses modified formulas and clamps quantities in a different manner, especially when dealing with alpha. Gimp does the same. This is my experience recreating the Photoshop blend modes exclusively using a combination of the blend unit and shaders. The first few blend modes are simple, but as we progress we’ll have to resort to more and more tricks to get what we want.
Two caveats before we start. First off, Photoshop blend modes do their blending in sRGB space, which means if you do them in linear space they will look wrong. Generally this isn’t a problem, but due to the amount of trickery we’ll be doing for these blend modes, many of the values need to go beyond the 0–1 range, which means we need an HDR buffer to do the calculations. Unity can do this by setting the camera to HDR in the camera settings, and also setting Gamma as the color space in the Player Settings. This is clearly undesirable if you do your lighting calculations in linear space. In a custom engine you would probably be able to set this up differently (to allow for linear lighting).
If you want to try the code out while you read ahead, download it here.
[File]
A) Darken
Formula
min(SrcColor, DstColor)
Shader Output
color.rgb = lerp(float3(1, 1, 1), color.rgb, color.a);
Blend Unit
Min(SrcColor · One, DstColor · One)
As alpha approaches 0, the minimum must tend toward DstColor, so we force SrcColor toward the maximum possible color, float3(1, 1, 1).
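In Unity ShaderLab, this blend-unit configuration might be written as follows (a sketch of just the blend state inside a pass, not a complete shader; Min as a blend op requires hardware support):

```shaderlab
// Darken: result = min(Src * One, Dst * One)
BlendOp Min
Blend One One
```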
B) Multiply
Formula
SrcColor · DstColor
Shader Output
color.rgb = color.rgb * color.a;
Blend Unit
SrcColor · DstColor + DstColor · OneMinusSrcAlpha
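The corresponding ShaderLab blend state might look like this (sketch only; ShaderLab’s Blend SrcFactor DstFactor computes Src · SrcFactor + Dst · DstFactor):

```shaderlab
// Multiply: result = Src * DstColor + Dst * OneMinusSrcAlpha
Blend DstColor OneMinusSrcAlpha
```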
C) Color Burn
Formula
1 – (1 – DstColor) / SrcColor
Shader Output
color.rgb = 1.0 - (1.0 / max(0.001, color.rgb * color.a + 1.0 - color.a)); // max to avoid infinity
You can see discrepancies between the Photoshop and the Unity version in the alpha blending, especially at the edges.
H) Linear Dodge
Formula
SrcColor + DstColor
Shader Output
color.rgb = color.rgb;
Blend Unit
SrcColor · SrcAlpha + DstColor · One
This one also exhibits color “bleeding” at the edges. To be honest, I prefer the one on the right just because it looks more “alive” than the other one. The same goes for Color Dodge. However, this limits the 1-to-1 mapping to Photoshop/Gimp.
All of the previous blend modes have simple formulas, and one way or another they can be implemented via a few instructions and the correct blending mode. However, some blend modes have conditional behavior or expressions that are complex relative to the blend unit, and these need a bit of re-thinking. Most of the blend modes that follow needed a two-pass approach (using the Pass syntax in your shader). Two-pass shaders in Unity have a limitation in that the two passes aren’t guaranteed to render one after the other for a given material. These blend modes rely on the previous pass, so you’ll get weird artifacts: if you have two overlapping sprites (as in a 2D game, such as our use case), the sorting will be undefined. The workaround is to change the Order in Layer property to force them to sort properly.
How I ended up with Overlay requires an explanation. We take the original formula and approximate via a linear blend:
We simplify as much as we can and end up with this
The only way I found to get DstColor · DstColor is to isolate the term and do it in two passes, therefore we extract the same factor in both sides:
However this formula doesn’t take alpha into account. We still need to linearly interpolate this big formula with alpha, where an alpha of 0 should return Dst. Therefore
If we include the last term into the original formula, we can still do it in 2 passes. We need to be careful to clamp the alpha value with max(0.001, a) because we’re now potentially dividing by 0. The final formula is
For Soft Light we apply very similar reasoning to Overlay, which in the end leads us to Pegtop’s formula. Both are different from Photoshop’s version in that they don’t have discontinuities. This one also has a darker fringe when alpha blending.
Hard Light has a very delicate hack that allows it to work and blend with alpha. In the first pass we divide by some magic number, only to multiply it back in the second pass! That’s because when alpha is 0 it needs to result in DstColor, but it was resulting in black.
[29/04/2019] Roman in the comments below reports that he couldn’t get Linear Light to work using the proposed method and found an alternative. His reasoning is that the output color becomes negative, which gets clamped. I’m not sure what changed in Unity between when I did it and now, but perhaps it relied on having an RGBA16F render target, which may have since changed to some other HDR format such as RG11B10F or RGB10A2 that doesn’t support negative values. His alternative becomes (using RevSub as the blend op):
1. Unpairing the device is recommended (Xcode: Window > Devices and Simulators > right-click the device > Unpair Device).
2. Connect the iPhone with a cable and tap 'Trust'.
3. Relaunch Xcode, open Devices and Simulators, and check that there are no errors.
4. If there are none, you're done.
Follow this: https://developer.apple.com/forums/thread/650077
Also make sure to shut down your iPhone, start it back up.
Then I would also suggest unpairing the device from Xcode (Window > Devices and Simulators...).
And then Clean Build Folder, and quit and restart Xcode again!
GoogleMobileAds iOS SDK 7.68 and later is only supported with Firebase 7.x and later.
Update Firebase.
Unity: 2019.4.8f1
AdMob v5.4.0
Firebase v6.16.1 (Messaging & Analytics)
Target minimum iOS Version 12.0
Problem
I can't build the project. Xcode error:
../Libraries/Plugins/iOS/GADUAdLoader.h:5:9: 'GoogleMobileAds/GoogleMobileAds.h' file not found
When I try to update the pods, the terminal throws the following error:
[!] CocoaPods could not find compatible versions for pod "GoogleAppMeasurement":
In Podfile:
Firebase/Analytics (= 6.32.2) was resolved to 6.32.2, which depends on
Firebase/Core (= 6.32.2) was resolved to 6.32.2, which depends on
FirebaseAnalytics (= 6.8.2) was resolved to 6.8.2, which depends on
GoogleAppMeasurement (= 6.8.2)
Google-Mobile-Ads-SDK (~> 7.68) was resolved to 7.68.0, which depends on
GoogleAppMeasurement (~> 7.0)
Attempts
Add pod 'GoogleAppMeasurement', '7.0' to the Podfile.
Result
CocoaPods could not find compatible versions for pod "GoogleAppMeasurement":
In Podfile:
Firebase/Analytics (= 6.32.2) was resolved to 6.32.2, which depends on
Firebase/Core (= 6.32.2) was resolved to 6.32.2, which depends on
FirebaseAnalytics (= 6.8.2) was resolved to 6.8.2, which depends on
GoogleAppMeasurement (= 6.8.2)
Google-Mobile-Ads-SDK (~> 7.68) was resolved to 7.68.0, which depends on
GoogleAppMeasurement (~> 7.0)
GoogleAppMeasurement (= 7.0)
Uninstall and reinstall CocoaPods
Result
Same error
Project Podfile
source 'https://github.com/CocoaPods/Specs.git'
source 'https://github.com/CocoaPods/Specs'
platform :ios, '12.0'
target 'UnityFramework' do
pod 'Firebase/Analytics', '6.32.2'
pod 'Firebase/Core', '6.32.2'
pod 'Firebase/Messaging', '6.32.2'
pod 'Google-Mobile-Ads-SDK', '~> 7.68'
end
How can I resolve this problem?
[Answer]
Google-Mobile-Ads-SDK version 7.68 is only compatible with Firebase 7.x. If you want to keep using Firebase 6.x, you need to use version 7.67 or earlier.
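A hedged sketch of the adjusted Podfile, keeping Firebase 6.x and pinning the ads SDK (verify that 7.67.0 is the right pin against the current specs repo):

```ruby
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '12.0'

target 'UnityFramework' do
  pod 'Firebase/Analytics', '6.32.2'
  pod 'Firebase/Core', '6.32.2'
  pod 'Firebase/Messaging', '6.32.2'
  # Last release line that still depends on GoogleAppMeasurement 6.x
  pod 'Google-Mobile-Ads-SDK', '7.67.0'
end
```

Then run pod install (or pod update Google-Mobile-Ads-SDK) so the lockfile picks up the downgraded pin.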
Podfile.lock is assigned a CHECKSUM: a hash value that guarantees the uniqueness of the Podfile.lock. If any version changes, the CHECKSUM changes as well.
pod update
When you run pod update {PodName}, CocoaPods searches for an updated version of that pod without consulting Podfile.lock. This command updates the pod to the latest version (as long as it matches the version constraints in the Podfile). If you simply run pod update with no name, CocoaPods updates every pod to the latest version possible.
pod outdated
When you run pod outdated, CocoaPods lists every pod that has a newer version than the one recorded in Podfile.lock. For those pods, running pod update {PodName} would update them (again, as long as they match the version constraints in the Podfile!).
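As a quick illustration of this workflow (the pod name is a placeholder):

```
pod outdated          # list pods newer than the versions in Podfile.lock
pod update SomePod    # update only SomePod, within the Podfile's constraints
pod update            # update every pod to the latest allowed version
```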
pod repo update
This updates every podspec file in /Users/{username}/.cocoapods/repos. A podspec file contains important information about a pod, such as its source address.
~/.cocoapods/repos holds the podspec files for every available version of every pod. Running pod repo update refreshes them to the latest podspecs. If you get an error because the podspec for a newly added library hasn't been updated yet, this command can resolve it.
Commit your Podfile.lock!
If you're collaborating with teammates, you must share Podfile.lock: it keeps everyone on the same pod versions. Whenever the Podfile changes, manage the dependencies with the pod install command. If you and your teammates fail to end up with the same CHECKSUM, simply run rm -rf Pods && pod install. 😎